Oracle DB: information about functions, their parameters, DML and DDL statements
Information for Oracle Certified Associate - part 1
Chapter 1
Here are the notes I wrote while I was reading OCA/OCP: Introduction to Oracle9i SQL. Might be useful if you want to quickly check some details, e.g. which parameters are needed by a specific function.
Categories of SQL statements:
- DML: Data Manipulation Language (select, update, lock table, ...). Access, modify, create and delete data.
- DDL: Data Definition Language. Define, alter, drop DB objects and their privileges.
- Transaction Control: Group a set of DML statements into a single statement (commit, savepoint, set transaction, rollback)
- Session Control: Control properties of a user session (alter session, set role)
- System Control: Manage DB properties (alter system)
Datatypes:
- CHAR:
- Max length = 2000 bytes
- If string is shorter than field length, blanks are inserted at the right side.
- Default size = 1.
- VARCHAR2:
- Max length = 4000 bytes
- No default value (size has always to be specified).
- No blanks are inserted if string is smaller.
- An empty varchar2(2000) takes up the same amount of space as an empty varchar2(2)
- NUMBER ([precision],[scale]):
- precision from 1 to 38
- scale from -84 to 127
- number(5,2): from -999.99 to 999.99
- number(4,2): 1.125 => 1.13 / -1.125 => -1.12
- number(3,5): from -0.00999 to 0.00999
- number: p & s are set to the maximum
- number(6,2): 12345.12345 => error (1 too much on the left side)
- number(6,2): 123456 => error (2 too much on the left side)
- number(6): 1234.98 => 1235
- number(6): 123456.1 => 123456
- number(5,-2): 12345.345 => 12300
- number(5,-2): 1234567 => 1234600
- number(*,1): number(38,1)
- number(4,5): 0.1 => error
- DATE
- Its time component has a resolution of 1 second.
- Needs 7 bytes
- "CENTURY/YYYY/MM/DD/HOUR/MIN/SEC " is contained in every date datatype.
- Date default is DD-MON-YY
- Time default is 00:00:00
- The default date format is specified in the NLS_DATE_FORMAT init parameter. It can be changed at session level.
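A minimal sketch of the NUMBER precision/scale rules and the NLS_DATE_FORMAT parameter described above (the num_test table is hypothetical):
create table num_test (amount number(6,2));
insert into num_test values (1234.567);   -- stored as 1234.57 (rounded to scale 2)
insert into num_test values (12345.67);   -- error: value larger than specified precision
alter session set NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
select sysdate from dual;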
Operators:
An operator is a manipulator applied to a data item in order to return a result.
- Unary operator: e.g. -5 or +2: it has only 1 operand (<operator><operand>)
- Binary operators: e.g. 5+2 or 2*7: they have 2 operands
- Arithmetic operators:
- unary operators: +/- (e.g. -5 or +2)
- addition: +
- subtraction: -
- multiplication: *
- division: /
- Concatenation operators: ||
- Used only for strings. Concatenating a '' (zero-length string) or a null with a string results in the string.
- Set operators
- Union: returns a distinct of all rows from both queries => no duplicate rows.
- Union all: returns everything from both queries (sums them up)
- Intersect: returns only a distinct of the common rows of both queries.
- Minus: only rows returned by the 1st query and not the second one. (as with "not in")
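A quick sketch of the set operators above (the t1 and t2 tables with an id column are hypothetical):
select id from t1
union             -- distinct rows from both queries
select id from t2;
select id from t1
minus             -- rows returned by the first query and not by the second
select id from t2;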
Operator precedence:
The ones with the same precedence are evaluated from left to right. The innermost parentheses are evaluated first.
1) unary (+/-)
2) Multiplication division (*, /)
3) addition, subtraction, concatenation (+, -, ||)
Literals:
They represent a fixed value (constants).
- Text / chars: max length = 4000 bytes
- Integer: without a decimal point. Max size = 38 digits
- Number: with a decimal point. Scientific notation is ok
- Interval: see chapter 7.
General information:
- Column alias names cannot be used in the where condition.
- Inequality operators: "!=", "<>", "^=" are all the same and can all be used.
- In the where clause, decimal numbers that start with 0 may be written without the leading 0. E.g.: 0.35 becomes .35.
- ANY/SOME operators: "where num <= any(10,30)" Result: 9,10,11,12,... . It's like many "OR".
- ALL: the field has to satisfy all the values contained in the parentheses: "where num <= all(10, 30)" Result: 9, 10. It's like many "AND".
- With "NULL" only "is null"/"is not null" can be used.
- Besides AND / OR, it's possible to use "NOT" to generate the opposite behaviour: "WHERE NOT(NUM=10)". Result: 9,11,12,...
- "!= ALL(10, 30)". Result: 9,11,12,...,29,31,....
- When I use "NOT IN" and the subquery returns any "NULL" row, no rows are returned (every comparison with the NULL evaluates to NULL).
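A short sketch of the ANY/ALL/NOT IN behaviour above (the numbers table with a num column is hypothetical):
select num from numbers where num <= any (10, 30);        -- like num <= 10 OR num <= 30
select num from numbers where num <= all (10, 30);        -- like num <= 10 AND num <= 30
select num from numbers where num not in (10, 30, null);  -- returns no rows because of the NULL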
Sorting (Page 30):
- If "DISTINCT" is used, only the columns of the select statement can be used in the "ORDER BY" clause.
- In ascending order sort, "NULL" rows appear at the bottom of the result. To avoid it: "ORDER BY NUM NULLS FIRST"
Case expression (page 36):
- Select case num when 1 then 'ciao'
                  when 2 then 'bao'
                  when 3 then 'boh'
  end
  from numbers;
- select case when num = 1 then 'ciao'
              when num = 2 then 'bao'
              when num = 3 then 'boh'
  end numeri
  from numbers;
Chapter 2: SQLPLUS (page 55)
- after writing a query, "/" or "RUN" repeats the command.
- "ED" or "EDIT" or "EDI" has the same effect.
- The buffer file is called "afiedt.buf".
- sqlplus edit commands:
- LIST (L): lists the contents of the buffer
- APPEND (A): appends a line of text to the active line of the buffer (remember to insert an extra blank at the beginning of the string).
- CHANGE (C): C/<oldstring>/<newstring>. Not case sensitive.
- INPUT (I): inserts a new line
- <number> (pag. 62): displays that line of the buffer.
- DEL [linenumber]: deletes the current line or if a number is specified deletes that line.
- "CLEAR BUFFER" (CL BUFF): deletes the whole buffer.
- SQLPLUS script commands:
- "SAVE <filename>": saves contents of the buffer to a file. Doesn't do anything if the file already exists.
- "SAVE <filename> REPLACE": creates a file if it does not exist yet, or overwrites an already existing one.
- "SAVE <filename> APPEND": appends buffer to the end of the file.
- "EDIT <filename>": edits
- "GET <filename>": reads contents of a file and copies it into the buffer.
- "START <filename>" / "@<filename>": executes a script
- "SPOOL <filename>" / "SPOOL OFF": starts or stops spooling
- SHOW ALL: shows all the sqlplus variables.
- SET <varname> <value>: sets sqlplus variable value. Most vars can be abbreviated.
- SQLPLUS variables (pag. 67):
- HEADING: displays column names when select is executed.
- LINESIZE
- PAGESIZE
- NEWPAGE: number of blank lines between pages.
- PAUSE
- SERVEROUTPUT: displays output of DBMS_OUTPUT.PUT_LINE.
- SQLCASE: converts sql commands upper or lowercase just prior to execution.
- SQLTERMINATOR: character used in order to end and execute SQL statements (default is ";").
- SUFFIX: default extension used for files (DEFAULT = .sql)
- TERMOUT: switches off/on output from command files.
- TIMING: displays time needed to execute commands
- UNDERLINE: char used in order to underline column names.
- More than 1 variable can be set at once: "set linesize 500 pause on newpage 1"
- Additional sqlplus commands: page 71 (SQLCODE: displays return code of most recent sql).
- Store current sql environment: "STORE SET <filename>". If you call the file "login.sql" and store it in the same directory as the sqlplus exec file, it will be executed each time you log in.
Columns:
- Change heading of a column: "column <original column name> heading 'ciao'".
- Change column width: "column <original column name> format a<charnbr>".
- Long header on two lines: "column <original column name> heading 'blabla|blabla'".
- Format numbers: "column <original column name> format 9,999.99".
- Right/center/left justification of column headings: "column <orig.column name> justify <left/center/right>".
- Display settings of a column: "column <orig.column name>".
- Delete column settings: "column <orig.column name> clear".
- Switch on/off column formatting: "column <col.name> on/off".
- Copy properties of a column to another column: "column <orig.column name> like <other col. name> [other options]".
- Avoid double results when multiple lines are returned: "break on <col name> no duplicates".
Headers & footers (pag. 78):
- Display a title on each page of the output (!check linesize!): "ttitle [justification] <string2display>"
- Display a string at the end of each page: "btitle [justification] <string2display>"
- SQL.PNO displays the page number: "ttitle [justification] '<string2display>' sql.pno"
- REPHEADER displays a string on the first page.
- REPFOOTER displays a string on the last page.
- "skip <nbr>": skips "nbr" of lines
- All together: "ttitle left "titolo sinistra" skip 1 center "titolo centro""
- temporarily switch off: "ttitle off" / "btitle off" / "repheader off" / "repfooter off"
- "CLEAR" can be used as well in order to:
- clear breaks
- clear columns
- clear buffer
- clear computes
- clear screen
- FEEDBACK: counts the lines that are returned by the query.
- VERIFY: switches on/off the display of the line with the substitution variable and, on the next line, the line with the substituted value.
- COMPUTE: perform summary operations on grouped columns (Oracle9):
"compute <operation like sum or avg> label "bla" of <field to be summed> on <summed by which column?>".
This is usually used with "break on".
BREAK ON DEPARTMENT_ID
COMPUTE SUM OF SALARY ON DEPARTMENT_ID
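A minimal SQL*Plus report sketch putting the formatting commands above together (the employees table and its columns are hypothetical):
ttitle center 'Salary report' skip 2
column salary format 9,999.99 heading 'Monthly|Salary'
break on department_id skip 1
compute sum label 'Total' of salary on department_id
select department_id, last_name, salary
from employees
order by department_id;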
Script variables:
- Variables have "&" in front
- The "&" is defined in the "define" sqlplus parameter ("SHOW DEFINE").
- Sqlplus prompts for a variable, only if it is not defined yet. In order to define it use: "define <varname without &> = <value>" OR "undefine <varname without &>". The value will always be saved as "char".
- List all the defined variables: "define".
- List only one variable: "define <varname>".
- Append "ciao" to the contents of the variable &deptid: "select * from bla where deptid = '&deptid.ciao' ".
- Variables can substitute anything (fields, literals, "from", tablenames, ...).
- Variables that substitute char or date literals, must be enclosed in single quotes, otherwise the user has to put them when he writes the value.
- Use "&&" in order to save the definition, so that it gets reused. E.g.: "select * from ps_dept_tbl where deptid = &&ciao" (here "ciao" gets saved and the system won't ask again for its value if the statement is run more than once. The variable survives for the whole session.).
- If there is more than one variable in the statement, the system prompts from top to bottom, left to right.
- "undefine" deletes the variable: "undefine ciao".
- With the "accept" command it is possible to create prompts: "accept myvar char prompt "Questo e' il prompt: " [hide]".
- ACCEPT is the only command in sqlplus that can be used to create a non-char datatype variable. DEFINE creates only CHAR datatypes.
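A short sketch of substitution variables and ACCEPT (the table name, the SYSADM schema and the variable names are hypothetical):
accept p_dept char prompt 'Department id: '
define p_owner = 'SYSADM'
select *
from &p_owner..ps_dept_tbl     -- the extra "." ends the variable name
where deptid = '&p_dept';
undefine p_dept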
ISQLPLUS (Page 93):
- The Isqlplus architecture has 3 layers:
- DB layer: Oracle9i DB and OracleNet
- Middle layer: Oracle HTTP server and Isqlplus server
- Client layer: user interface running on the web
- Multiple sql statements must end with a ";".
- The query can be executed only by pushing on the execute button.
- None of the buffer editing commands is accepted.
- Other commands that are not accepted: page 96.
Chapter 3: Single-Row functions (Page 111)
- There are 5 different types of functions:
- single-row (this chapter)
- aggregate (chapter 4)
- analytical
- object-reference
- programmer-defined
- Functions can be used in the "select", "where" and "order by" clauses. Single-Row functions cannot be used in "having" clauses. Only aggregate functions can be used there. Single-row functions can be used as well in "set" of update statements, "values" of insert statement, or the "where" of a delete statement.
- Single-row functions are divided into the following categories:
- null-value functions (pag. 113)
- character functions (pag 115)
- numeric functions (pag 129)
- date functions (pag 138)
- conversion functions (pag 149)
- other functions (pag 164)
- Null-value functions:
- nvl(X1, X2): Returns X2 if X1 is null. Otherwise X1.
- nvl2(X1, X2, x3): If X1=null returns x3, otherwise X2.
- Character functions:
- ascii(S1): Returns ascii code of first char of S1 (S1 may be a string).
- chr(N1): Returns char of N1 binary code.
- concat(S1, S2): Concatenates S1 and S2. If one of the two is null, the other is returned. If both are null, null is returned.
- initcap(S1): Returns S1 with first char uppercase and remaining ones lowercase.
- instr(S1, S2, [N1], [N2]): Start search in S1 at N1 (integer) and return the position of the N2 occurrence where S2 is found (in S1). Default of N1 and N2 is 1.
- length(S1): Returns length of S1. If S1 is null, null is returned. Therefore, it never returns 0.
- lower/upper(S1): Returns S1 all upper/lowercase.
- lpad/rpad(S1, N1, [S2]): Default value of S2 is a blank. If S1 is longer than N1, it is truncated to N1 characters; otherwise it is padded with S2 until it is N1 characters long (lpad pads on the left, rpad on the right).
- ltrim/rtrim(S1, [S2]): Default value for S2 is a blank. The function removes from the left / right side of S1 all the characters that appear in S2, until a character not in S2 is found.
- replace(S1, S2, [S3]): Default value of S3 is null. Replaces in S1 all S2 by S3.
- substr(S1, N1, [N2]): Default value of N2 is the length of S1 from N1 onwards (only what is after N1 is displayed). If N1 is negative it counts from right to left.
- soundex(S1): phonetic representation of S1.
- translate(S1, S2, S3): If S1, S2 or S3 is null, a null is returned. It replaces in S1 every character of S2 with the corresponding character of S3. E.g.: "translate('Marco', 'Mc', 'Bk')" Result: "Barko".
- trim ([[leading/trailing/both] s2 from] s1): If any S=null then null is returned. Default value of S2 is a blank. Returns S1 with all occurrences of S2 removed from one of the sides or both (not from the middle of the string).
- Numeric functions (page 129):
- abs(N1): Returns the absolute value of N1. E.g.: -1 becomes 1. 1 stays 1.
- bitand(N1, N2): Performs a bitwise AND of N1 and N2.
- ceil(N1): Rounds UP N1 to the next integer:
12.51 => 13
12.49 => 13
-12.49 => -12
-12.51 => -12
- floor(N1) does the opposite as ceil().
- exp(N1): Returns e raised to N1. Result has a 36 digits precision.
- ln(N1): Returns natural logarithm of N1. Result has a 36 digits precision.
- log(N1, N2): Returns the logarithm base N1 of N2. Result has a 36 digits precision.
- mod(N1, N2): Returns the remainder of N1/N2. If N1 is negative, the result is negative.
- power(N1, N2): Returns N1 to the power of N2.
- round(N1, N2): Rounds N1 to N2 digits of precision. E.g. round(num, 1):
12.51 => 12.5
12.49 => 12.5
12.45 => 12.5
12.44 => 12.4
-12.44 => -12.4
-12.45 => -12.5
-12.49 => -12.5
-12.51 => -12.5
- sign(N1): Returns -1 if N1 is negative, 1 if positive and 0 if it's 0.
- sqrt(N1): Returns square root of N1
- trunc(N1, N2): Returns N1 truncated to N2 digits of precision to the right of the decimal.
trunc(num, 1): 123.45 => 123.4
123.46 => 123.4
123.44 => 123.4
trunc(num, -1): 123.45 => 120
1234.56 => 1230
- Date functions (page 138):
- The date is stored as a number: [days since 4712 B.C.][fraction of a day (e.g. 0.5 is 12 o'clock)]
- The NLS (National Language Support) parameters of the DB allow me to nationalize my DB.
- The command "alter session set NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS' " will set the format for the implicit DB conversion. The conversion works both ways (when selecting from a table as well as when inserting).
- add_months(D, N1): Returns D plus N1 months.
- current_date: Returns current date for the session's time zone (session timezone)
- current_timestamp[P]: Returns the session's timestamp for the session's timezone to P digits of precision (default=6).
Returns date/time/timezone
- localtimestamp: like "current_timestamp" but without timezone.
- dbtimezone: page 142
- extract(C from D): "select extract (year from sysdate) from dual".
Valid values are:
year
month
day
hour
minute
second
timezone_hour (field must support this)
timezone_minute (field must support this)
timezone_region (field must support this)
timezone_abbr (field must support this)
- from_tz(A, B): Returns a timestamp with time zone.
- months_between(D1, D2): Returns the number of months of D1 - D2 (i.e. (D2 - D1) * -1)
(1.Dec.2004, 1.Feb.2005) => -2
- new_time(D1, TZ1, TZ2): no idea
- next_day(D1, S1): Returns the date (as specified in NLS_DATE_FORMAT) from D1 onwards (excluding D1) on which the day specified in S1 (DB-language) falls. E.g.:
If today is 21.12.2004, Tuesday:
select next_day(sysdate, 'Dienstag') from dual; (case is not important) => 28.12.2004
- round(D1, [format]): Default for format is the date part of NLS_DATE_FORMAT (without HH/MI/SS). It rounds the date + time up/down.
If it's '21-12-2004 16:42:54': select round(sysdate, 'MI') => 21-12-2004 16:43:00
- sessiontimezone: see page 147
- sys_extract_utc: see page 147
- systimestamp: see page 147
- returns a value that includes a time zone
- sysdate
- trunc(D1, [format]): truncates the date. "select trunc(sysdate, 'MI') => 21-12-2004 16:42:00
- tz_offset
- Conversion functions (page 149):
- asciistr(S1): Returns all ascii chars of S1 and the ASCII representation of non-ascii characters of S1. E.g.: select asciistr('schütte') => sch\00FCtte
- bin_to_num(B1): Converts a list of bits to a number.
- cast(E1 as T1): Converts E1 to datatype T1.
E.g. (STR is a string column):
insert into stetemp select cast(sysdate as varchar2(30)) from dual;
select upper(str) from stetemp;
No translation is performed.
- chartorowid(S1): Converts string S1 to a rowid (Page 153).
- rowidtochar(RID): No translation is performed.
- compose/decompose(S1): Returns S1 as a unicode string in a fully normalized form (Page153).
- convert(S1, DSET, [SSET]): Returns S1 converted from the source character set SSET to the destination character set DSET. Default for SSET is the DB character set.
- hextoraw(H): Converts the hexadecimal string H to a raw datatype.
- rawtohex(R)
- numtodsinterval(N1, S1): Converts N1 to a datetime value specified in S1. E.g.:
Now: 21-DEZ-2004 18:34:12
select sysdate + numtodsinterval(25, 'minute') from dual;
=> 21-DEZ-2004 18:59:12
- to_char(D/N1 [,format] [,nlsparm]):
- D/N1 is a date or a number
- format specifies the format of D/N1
- NLSPARM specifies the language or location formatting conventions.
Returns a string. E.g.:
1)
select to_char(sysdate, 'DD-Month-YYYY', 'NLS_DATE_LANGUAGE = Italian') from dual;
21-Dicembre-2004
(month => dicembre / Month => Dicembre / MONTH => DICEMBRE)
2)
SELECT TO_CHAR(-1234, 'L9G999D99')
-SF 1.234,00
- to_date(S1 [,format] [nlsparm]):
- S1 is a string
- format is the format of S1
- NLSPARM is the language or location
E.g.:
select to_date('Dicembre', 'Month', 'NLS_DATE_LANGUAGE = Italian')
01-DEZ-2004 00:00:00
- to_dsinterval(S1 [,NLSPARM]): page 162
- to_multi_byte(S1): page 162
- to_number(S1 [,format] [,NLSPARM]): page 163
- to_single_byte(S1): page 163
- to_yminterval(S1): page 163
- unistr(S1): page 163
- Other functions (page 164)
- bfilename(S1, S2): Returns type "BFILE_LOC". A kind of pointer to a file. The file doesn't have to exist, when the function is used, but it has to, when the pointer is used.
- coalesce(LIST): Returns the first non-null value of the list. If all of them are null, null is returned. Each item of the list should be the same type.
select coalesce(null, null, 'ciao', 'bao') from dual; => ciao
- decode (page 166): Does not have to return a value (null is returned in this case).
- dump(page 167)
- empty_blob(): Takes no arguments. Used to initialize a blob variable or column.
- empty_clob()
- greatest(LIST): Returns greatest value of the list. If a null is contained, a null is returned.
select greatest('A','B','C') from dual => C
- least(LIST): Opposite of greatest (it's not a grouping function).
- nullif(X1, X2): Returns null if X1=X2 or if X1=null. Otherwise returns X1.
- sys_connect_by_path(X1, C1) (page 170): X1 is a column, C1 is a char.
- sys_context: Replaces function USERENV.
- UID: Returns numeric user id of the current user.
- user: Returns user name of the current user.
- userenv(<option>) (page 176): Option may be:
!!!In Oracle9 use SYS_CONTEXT instead of USERENV!!!
ISDBA
SESSIONID
ENTRYID
INSTANCE
LANGUAGE
LANG
TERMINAL
- vsize(X): Returns size in bytes of X.
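A few quick single-row function calls as a sketch of the syntax above (all run against dual):
select nvl(null, 'default') from dual;                        -- default
select instr('stefano', 'e', 1, 1) from dual;                 -- 3
select lpad('42', 5, '0') from dual;                          -- 00042
select round(12.45, 1), trunc(12.45, 1) from dual;            -- 12.5 / 12.4
select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') from dual;
select to_date('21-12-2004', 'DD-MM-YYYY') from dual;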
Chapter 4: Group functions (Page 189)
- Group functions almost never return a null, even if only nulls are evaluated.
- Functions that can be nested in grouping functions are both group and single-row functions.
- Opposite of "distinct" is "all". "all" is the default.
- Group functions cannot appear in the "where" clauses.
- Programmer-written functions cannot be used on grouped data.
- Functions (page 195):
- avg([distinct/all] N1): Average of N1.
- corr(N1, N2): no idea - page 195 - Can return null!!!
- count([distinct/all] */X1):
select count(X1) from boh: Counts all the non-null lines.
select count(*) from boh: Counts all the lines.
select count(distinct *) from boh: Not possible.
- covar_pop(N1, N2): no idea (page 197)
- covar_samp(N1, N2): no idea (page 198)
- cume_dist(?): no idea (page 198)
- dense_rank(?): no idea (page 199)
- first: See KEEP
- group_id(): Returns id of duplicate groups (Page 201)
- grouping(EXPR) (page 201): EXPR is an expression contained in the "group by" clause. This function can be used only in queries that have a "group by" + a "cube" or "rollup" clause. It returns 1 if the displayed line is an aggregation (super-aggregate row) for the specified column, 0 otherwise.
- keep / last: no idea (page 202).
- max/min (page 204): Nulls are not taken into consideration.
- percent_rank: (page 205)
- percentile_cont: (page 206)
- percentile_disc: (page 208)
- rank: (page 209)
- regr_avgx(Y, X): First removes the pairs that contain a null in either Y or X, then calculates the average of X (can return null!)
- regr_avgy(Y, X): Same as REGR_AVGX, but calculates average of Y.
- regr_count(Y, X): Same as REGR_AVGX, but counts the remaining lines.
- regr_intercept(Y, X): page 212
- regr_r2(Y,X): page 212
- regr_slope(Y, X): page 212
- regr_sxx(Y, X): page 212
- regr_sxy(Y, X): page 213
- regr_syy(Y, X): page 213
- stddev: page 214
- stddev_pop: page 215
- stddev_samp: page 215
- sum([distinct/all] N1): Can return null
- var_pop(X): You cannot use DISTINCT in this function! page 216
- var_samp: page 217
- variance: page 217
- having: As the operations on grouped data have to happen after the data has been grouped, such operations cannot be put in the "where" clause, but have to be put in a separate clause: having.
- cube / rollup aggregates (page 221): Cube creates totals for all possible distinct combinations of the fields that are specified. The order of expressions is NOT significant.
Rollup creates subtotals rolling up from right to left over the columns specified in its brackets (hierarchical structure). The order of expressions IS significant.
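A sketch of grouping with ROLLUP and GROUPING (the employees table and its columns are hypothetical):
select department_id, job_id, sum(salary) total_sal
from employees
group by rollup (department_id, job_id)   -- subtotals per department plus a grand total
order by department_id, job_id;
-- GROUPING() marks the super-aggregate rows produced by ROLLUP/CUBE
select department_id, grouping(department_id) is_total, sum(salary)
from employees
group by rollup (department_id);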
Chapter 5: Joins & subqueries (Page 242)
- "Equality join" / "inner join": That means joining two tables using "=" (equality operator).
- When more than 2 tables have to be joined, Oracle first joins two of them, and afterwards joins the result to a third table and so on. More than 2 tables are never joined at the same time.
- "Complex joins" are queries that have in the "where" clause not only the join, but as well a condition that limits the rows.
- "select sysadm.ps_dept_tbl.deptid from ps_dept_tbl" will produce an error.
- "select ps_dept_tbl.deptid from ps_dept_tbl" is ok.
- "select * from ps_dept_tbl d where ps_dept_tbl.deptid = '0106755' " will produce an error.
- ansi/iso sql joins:
- General infos:
<tblname> natural [inner] join <tblname>
- Here all the columns with the same name of the two tables are automatically joined.
- As all columns with the same name (they have to be of the same datatype - otherwise unpredictable results are generated) are joined and therefore identical, no table qualifier (like a.deptid) may be used for them in the select clause (page 246).
- The "select *" will display only once each column, even if it's in multiple tables.
- Inner join:
<tblname> natural [inner] join <tblname>
<tblname> [inner] join <tblname> using (<column1>, <column2>, <...>)
<tblname> [inner] join <tblname> on <condition1 --(only tbls mentioned previously can be referred to)--> [and/or/... <condition2> ...]
- Outer join:
<tblname> natural full/left/right [outer] join <tblname>
<tblname> full/left/right [outer] join <tblname> on/using ...
- Cartesian join:
<tblname> cross join <tblname>
- Self-join:
<tblname> [inner] join <tblname> on <condition> (not possible to use "natural" or "using")
- Nonequality join:
<tblname> [inner] join <tblname> on <condition> (the condition has to include a != or < and so on)
- Set operators:
- union
- union all
- intersect
- minus
- Subqueries:
- A max of 255 levels of subqueries can be used.
- Types of subqueries:
- Single row: returns 1 row
- Multiple row: returns x rows
- Correlated: joins with the parent query - the subquery joins one of its columns to one of the parent query's columns.
- Scalar: returns 1 value for each row. Usually used after the "select" clause, but it can be used everywhere - SELECT, WHERE, ORDER BY, CASE, single-row functions.
- Multiple column: "where (bla, bla, bla) in (select bla, bla, bla) ...."
- Subqueries in DML statements (pages 272 + 273):
- "with read only": subq. cannot be updated.
- "with check option": modify only rows that comply with the subquery. In this case a constraint of type "V" is added to the data dictionary (you see it in USER_CONSTRAINTS).
- !!!You cannot have an "order by" in a subquery of a where clause!!!
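A short sketch of the ANSI join syntax and a correlated subquery (the departments and employees tables are hypothetical):
select d.department_name, e.last_name
from departments d
join employees e using (department_id);
select d.department_name, e.last_name
from departments d
left join employees e on (e.department_id = d.department_id);   -- keeps departments without employees
-- correlated subquery: employees earning more than their department's average
select e.last_name, e.salary
from employees e
where e.salary > (select avg(i.salary)
                  from employees i
                  where i.department_id = e.department_id);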
Chapter 6: Modifying data (Page 289)
Chapter summary:
- Oracle supports the following DML statements:
- insert (page 291)
- update (page 296)
- merge (page 297)
- delete (page 299) (see truncate DDL page 300)
- select for update (page 303)
- lock table (page 303)
- Transaction control (page 309):
- savepoints & partial rollback (page 311)
- consistency & transaction (page 312)
- spec rollback segment for a transaction (page 314)
- Insert:
- I cannot insert into a view that contains any of the following:
- aggregate function
- distinct operator
- set operator
- group/order/connect by clauses
- subquery in the select list
- Implicit data conversions are performed where possible. E.g. integer <=> string.
- Insert into multiple tables:
- Same as classical statement but starts with "INSERT ALL" instead of just "INSERT":
insert all into <tblname> [(<destination columns>)] [values (<source columns>)]
into <tblname> [(<destination columns>)] [values (<source columns>)]
...
select <source columns>
from blabla
insert all into stetemp1 (setid, deptid, descr) values (setid, deptid, 'table one')
into stetemp2 (setid, deptid, descr) values (setid, deptid, 'table two')
select setid, deptid
from ps_dept_tbl
- With when clauses:
insert <all/first> when <condition> then <into clause>
when <condition> then <into clause>
...
else <into clause>
select <source cols>
from blabla
- insert first when deptid = '0106755' then into stetemp1(setid, deptid, descr) values (setid, deptid, 'first case')
when deptid = '0892001' then into stetemp1(setid, deptid, descr) values (setid, deptid, 'second case')
else into stetemp1(setid, deptid)
select distinct setid, deptid
from ps_dept_tbl
where deptid in ('0106755', '0892001', '0892002')
- Update:
Can update multiple columns at once using (...): update stetemp1 set (setid, deptid) = (select '00106', '0106755' from dual)
- Merge:
Can update and/or insert rows in a table.
merge into <tblname> using <tbl/vw/subq> [<alias>] on <condition>
when matched then update set <expression>
when not matched then insert (<expression>)
values <expression>
Example 1:
merge into stetemp4 x
using stetemp1 y on (y.id = x.id)
when not matched then insert values (y.id, 'insert')
when matched then update set x.name = 'update'
Example 2:
merge into stetemp4 x
using stetemp1 y on (x.id = y.id)
when not matched then insert (x.id, x.name) values (y.id, 'insert')
when matched then update set x.name = 'update'
- Truncate:
- It is a DDL statement
- truncate table <tablename> [drop/reuse storage]
- drop (default):
- resets highwatermark
- makes table & index size equal to the MINEXTENTS number of extents.
- resets NEXT to last deallocated extent
- reuse
- won't change table&index size
- won't reset NEXT
- The "delete" statement is slow because it generates undo information.
- When truncate is issued, uncommitted statements are committed.
- Doesn't fire delete triggers
- "Drop any table" system privilege is needed in order to drop tables that belong to another schema.
- Parent tables cannot be truncated without first disabling foreign keys.
- Does not invalidate related objects, drop indexes, ...
- Before issuing the truncate statement all foreign key constraints have to be disabled.
- "Select ... for update":
- Locks selected rows.
- Rollback or commit has to be issued in order to release the rows.
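A minimal sketch of locking rows with "select ... for update" (table and column names are hypothetical):
select setid, deptid
from ps_dept_tbl
where deptid = '0106755'
for update nowait;     -- fail immediately if another session already holds the lock
update ps_dept_tbl set descr = 'locked and updated' where deptid = '0106755';
commit;                -- releases the row locks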
- Lock statement:
- lock table <tblname> in <locking mode> [nowait]
locking mode:
- exclusive mode: no other session is able to acquire a shared or exclusive lock
- shared mode: no other session is able to get an exclusive lock. Shared locks are allowed.
- E.g.: "lock table ste.stetemp in exclusive mode".
- Data changes require an exclusive mode.
- Deadlocks (page 304): happen when two sessions each hold a lock and each is waiting for the other one.
| Time | User 1                                        | User 2                                        |
|------|-----------------------------------------------|-----------------------------------------------|
| 1    | update field1 row1 table1                     | ---                                           |
| 2    | ---                                           | update field1 row1 table2                     |
| 3    | update field1 row1 table2 (waiting for user2) | ---                                           |
| 4    | ---                                           | update field1 row1 table1 (waiting for user1) |
- Table & row lock exist:
- table lock: can be exclusive or shared
- row lock: is always exclusive
- none of the lock types refuses read access
- Creating or altering an index will always fail if there are uncommitted rows (row exclusive lock) for the table => "WAIT" cannot be specified in DDL statements.
- The REVOKE statement will perform an implicit commit, as TRUNCATE does.
Ways of locking rows & tables:
Type of lock | SS | RX | SRX | S | X |
---|---|---|---|---|---|
SS (Row Share) | Y | Y | Y | Y | - |
RX (Row Exclusive) | Y | Y* | - | - | - |
SRX (Share Row Exclusive) | Y | - | - | Y | - |
S (Share / Table Share) | Y | - | - | Y | - |
X (Exclusive / Table Exclusive) | - | - | - | - | - |
* Exception: if there is an update on a line, the youngest one waits for the oldest one to commit/rollback.
Syntax for explicit lock:
| Type of lock | Syntax                                    |
|--------------|-------------------------------------------|
| SS           | Lock table xx in Row Share Mode           |
| RX           | Lock table xx in Row Exclusive Mode       |
| SRX          | Lock table xx in Share Row Exclusive Mode |
| S            | Lock table xx in Share Mode               |
| X            | Lock table xx in Exclusive Mode           |
Implicit lock is aquired with:
SS: Select <fieldnames> from <tblname> where ... for update
RX: Becomes active with "Insert / Delete / Update" statements
All DDL statements wait for all the locks.
Transaction control (pag. 309)
It coordinates multiple concurrent access to the same data.
The consistency can be set to be on "transaction" or on "statement" level. "Transaction" is what starts with an INSERT, UPDATE, MERGE, DELETE, LOCK, SET TRANSACTION or SELECT FOR UPDATE statement and ends with a COMMIT or ROLLBACK.
Statement consistency is the default method. Each single statement accesses the data with the status it had when the single statement was executed.
Transaction consistency: all the statements of a transaction access the data with the status it had when the transaction began.
In order to read data with a status older than the actual one, the rollback segments that were generated when the change was done, are accessed. SCN (System Change Numbers) are used to implement consistency.
- SET TRANSACTION: Enables the consistency on "transaction" or "statement" level.
- "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE": transaction
- "SET TRANSACTION READ ONLY": transaction. This will make changes to data impossible (only SELECT without update, LOCK TABLE, SET ROLE, ALTER SYSTEM and ALTER SESSION are allowed). A COMMIT or ROLLBACK can be used to end the read-only transaction.
- "SET TRANSACTION ISOLATION LEVEL READ COMMITTED": statement
- ROLLBACK TO [SAVEPOINT]: does a rollback to a specific savepoint. If you've got more than 1 savepoint, and you rollback to the second-last one, the last one is rolled back as well - not possible to jump here and there.
- INSERT INTO STE_TBL VALUES ('ciao');
- SAVEPOINT STE_SAVEP1
- insert into STE_TBL VALUES ('marco');
- SAVEPOINT STE_SAVEP2
- ROLLBACK TO SAVEPOINT STE_SAVEP1
- only "ciao" is visible.
- SET TRANSACTION can also be used to assign a specific rollback segment to a transaction. This might be handy when most of the statements are small, but e.g. during batch processing a couple of statements do large modifications. This way special rollback segments on big discs can be used (performance & space management).
- "SET TRANSACTION USE ROLLBACK SEGMENT ste_large;"
- ste_large (INITIAL 100M NEXT 100M MINEXTENTS 2)
- ste_small (INITIAL 1M NEXT 1M MINEXTENTS 5)
- Therefore it's better that big statements use ste_large
Chapter 7: Managing tables and constraints (pag 333)
USER_TABLES contains the user's tables.
DBA_TABLES contains all the DB tables
ALL_TABLES contains the user's tables and the ones he has access to.
Oracle built-in datatypes:
- Character
- Numeric
- Date and Time
- Binary
- Row ID
1- Character (pag 337):
Trying to insert a string that is larger than the field, will always return an error.
- CHAR
- If the value is smaller than the field, trailing spaces are inserted.
- Default size is 1 byte
- Maximum size is 2000 bytes
- Size can be specified in CHARS or BYTES. BYTE is the default
- NCHAR
- Used to store unicode character set
- Size is always specified in CHARS.
- VARCHAR2 / VARCHAR
- No default value.
- Size can be specified as CHARS or BYTES
- Min / max are 1/4000
- Oracle recommends VARCHAR2.
- NVARCHAR2
- Same as above.
- Size specified in CHARS
- CLOB
- Size does not need to be specified (it's variable-length).
- Max 4GB
- NCLOB
- Unicode version of CLOB.
- LONG
- Discouraged by Oracle.
- Use CLOB instead of LONG.
- Max 2GB
- Max 1 LONG column in each table
- LONG cannot appear in WHERE, GROUP BY, ORDER BY clauses.
2- Numeric (Pag 339)
- NUMBER
- Can be used to represent all the non-Oracle numeric datatypes such as integer, long, float, double, decimal, ...
3- Date and Time (Pag 340)
- DATE
- Stores dates from 1.1.4712 BC to 31.12.9999.
- If a date without time is specified, the default is 00:00:00
- If a time without a date is specified, the default is the first day of the current month.
- It stores century, year, month, date, hour, minute and seconds.
- The output can be formatted using the NLS_DATE_FORMAT parameter or by the function TO_CHAR. See chapter 3.
- TIMESTAMP
- The only difference between TIMESTAMP & DATE is the ability to store seconds with a fractional precision of 9 digits. Default is 6 and can range from 0 to 9.
- TIMESTAMP WITH TIME ZONE (pag 341)
- TIMESTAMP WITH LOCAL TIME ZONE - time zone displacement
- INTERVAL YEAR TO MONTH
- Represents an interval as years and months.
- Precision from 0 to 9. Default is 2. It is the precision for the year field.
- INTERVAL DAY TO SECOND:
- Represents a period of time in days, hours, minutes, seconds.
- [<precision>] specifies the one needed for the day field. Default is 6. min = 0, max = 9.
- Date arithmetic:
- Operators that can be used with date/time datatypes: + -
- Operators that can be used with interval datatypes: + - * /
- "[datatype] + 1": adds one day
- "[datatype] + 0.5": adds 12 hours
- subtraction of two dates gives result in days (number).
- It's not possible to add two datetime values.
- Use function NUMTODSINTERVAL to convert resulting number
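A quick sketch of the date arithmetic rules above:
select sysdate + 1 from dual;                                    -- tomorrow, same time
select sysdate + 0.5 from dual;                                  -- 12 hours from now
select to_date('01-01-2005','DD-MM-YYYY') - sysdate from dual;   -- number of days between the two dates
select sysdate + numtodsinterval(90, 'minute') from dual;        -- 90 minutes from now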
4- Binary (Pag 346)
These datatypes store values without converting them into the DB charset. Videos, exe files, etc can be stored.
- RAW
- Stores up to 2000 bytes
- Is variable-length
- No default value
- Size has to be specified
- BLOB
- Stores up to 4GB
- No size has to be specified
- BFILE
- Up to 4GB
- It contains only a pointer to a file saved in the OS.
- BFILE is the only one of the LOB datatypes (CLOB, BLOB, NCLOB, BFILE) that stores data outside the DB.
- LONG RAW:
- Only one long raw column per table
- Supported for backward compatibility only => use BLOB
5- Row ID (pag 347)
- ROWID
- Stores physical address of a row
- 2 types:
- Logical rowid: stores addresses of rows in index-organised tables.
- Physical rowid: stores addresses of rows in ordinary tables (clustered tables, table partitions, subpartitions, indexes, index partitions, subpartitions).
- Physical rowids are the fastest access method to a row in a table.
- UROWID
- Stores logical rowid of index-organised tables or non-oracle tables.
- Logical rowid is created using the primary key of index-organized tables.
- Logical rowid changes only if the primary key changes.
Creating tables (pag. 348)
- Table, column names and column aliases (labels) can be max 30 chars long.
- Table and column names can be stored as case sensitive with "". (not recommended)
- Table and column names have to begin with a letter ($ _ # are not allowed)
- Table and column names may contain the signs $ _ #
- When a table is created without specifying its type, a permanent relational table is created.
- Use "CREATE GLOBAL TEMPORARY TABLE ..." in order to create a table whose definition is available to all sessions in the DB, while its rows are visible only to the session that inserted them (pag. 350). The clauses specify how data has to be handled:
- "on commit preserve rows": rows are available for the whole session
- "on commit delete rows" (default): rows are available for the whole transaction (until the commit/rollback)
Specify default values for columns (pag. 351):
- Implicit default value for a column: NULL
- Default values can also be sysdate, user, userenv, uid
- Default values cannot be level, nextval, currval, rownum, prior
- "create table ste_bla (name varchar2(50) default 'ciao')"
- If I add a column with a default value to a table and the table already has rows, the new column will contain the default value for all existing rows.
Commenting tables and columns (pag.353):
- "comment on table ste_bla is 'il mio commento'"
- "comment on column ste_bla.col1 is 'colonna uno'"
Creating tables from other tables (pag. 354)
- "create table ste_bla as select * ..."
- Does not work if source table has "long" columns
- Constraints and column defaults are not copied. Exception: "not null" constraints.
Modifying tables (pag. 356)
- Add columns:
- Adding just 1 column: "alter table <tblname> add [(]nome varchar2(50) [)]"
- If multiple columns are added, parentheses have to be used: "alter table <tblname> add (nome varchar2(50), amount number, ...)"
- It is not possible to directly specify "not null" constraint if the table already has rows.
- Modify columns (pag. 358):
- "alter table <tblname> modify <colname> <new attributes>"
- "alter table <tblname> modify (<colname> <new attributes>, <colname> <new attributes>, ...)". E.g.: "create table ste_tbl1 (amount number(10,2))" => "alter table ste_tbl1 modify amount number(8, 2)"
- Omitted parts (datatype, default value, constraints) remain unchanged.
- A new default value affects only new rows.
- To remove a default value, set a default of "NULL"
- Rules for modifying column definitions: pag 359
- Drop columns (pag. 359)
- drop 1 column: alter table <tblname> drop column <colname> [CASCADE CONSTRAINTS]
- drop multiple columns: alter table <tblname> drop (<colname1>, <colname2>) [CASCADE CONSTRAINTS]
- Rename columns
- Not possible in Oracle
- Mark column as unused (pag. 360):
- Mark 1 column: "alter table <tblname> set unused column <colname> [CASCADE CONSTRAINTS]". This way the column is marked for deletion.
- Mark multiple columns: "alter table <tblname> set unused (<colname1>, <colname2>) [CASCADE CONSTRAINTS]"
- Saves resources. Columns can be dropped later.
- After marking the column as unused, it is not visible with the command "describe <tblname>". They're in DBA_UNUSED_COL_TABS, ALL_UNUSED_COL_TABS, USER_UNUSED_COL_TABS
- Drop a column which is marked as unused:
- Drop all unused columns: "alter table <tblname> drop unused columns"
- Continue a previously interrupted drop operation: "alter table <tblname> drop columns continue"
- Drop tables:
- "drop table <tblname> [CASCADE CONSTRAINTS]"
- marks views, mat views, stored programs that reference the table as invalid.
- Rename table (pag. 361):
- "rename <old name> to <new name>" (possible only for objects owned by the user that executes the command).
- Use "alter table <owner>.<tblname> rename to <newname>" command for objects that belong to other users.
Constraints (pag. 362)
They're created by "create table" & "alter table".
If no name is specified, Oracle creates a unique name that starts with "sys_". This is not recommended as it can be confusing.
Types:
- NOT NULL: Prevents null values from being entered
- CHECK: checks if a condition is satisfied
- UNIQUE: checks that there are no duplicate values for the column(s) that are specified
- PRIMARY KEY:
- only 1 PK per table is allowed.
- uniquely identifies each row of the table + prevents null values
- FOREIGN KEY:
- the FK of a table identifies a PK or unique key of another table => establishes a parent/child relationship
All these constraints are applied by "create table" or "alter table". If the name is not specified, the system creates one that starts with "sys_". To specify the name of a constraint, write "constraint <cons name>".
In order to create disabled constraints (they're enabled as default when they're created) you specify the word "disable" after the constraint definition.
The constraint definitions are saved in (pag. 368):
- USER_CONSTRAINTS
- ALL_CONSTRAINTS
- USER_CONS_COLUMNS
- ALL_CONS_COLUMNS
At CS we have a lot of NOT NULL constraints.
NOT NULL:
- "create table ste_tbl (name varchar2(50) [constraint steconst] not null, lastname varchar2(50) not null)": the first constraint name is user-defined, but the second one will be "sys_..."
- "alter table ste_tbl modify name null"
- The "not null" constraint is the only one which is taken over by a statement like "create table stetable as select * from ciaotable".
CHECK:
It can be defined at the column & table level.
"[CONSTRAINT <consname>] CHECK <condition>"
The condition:
Cannot use environment functions (sysdate, user, ...)
Cannot use pseudo-columns (rownum, currval, nextval, level, ....)
Should evaluate to a boolean result
Can refer to other values in the same row
Cannot use queries
Can contain more than 1 check constraint
Example of checks:
- At table level: "create table ste_tbl (name varchar2(50), salary number, [CONSTRAINT ciao] check (salary > 0))". (Not possible to use "alter table modify...")
- At column level: "alter table ste_tbl add [CONSTRAINT ciao] check (bonus < salary)"
- Example of a check that is dependent on another check:
"alter table ste_tbl add [CONSTRAINT ciao] check
(
(bonus is null and salary is null)
or
(bonus is not null and salary is not null)
)
UNIQUE:
Single-column syntax: "[CONSTRAINT <consname>] unique"
Table syntax: ", [CONSTRAINT <consname>] unique (<col1>, <col2>)"
- If an index for the same columns (in the same order) already exists, oracle uses it (no duplicate values must exist).
- If an index for the same columns doesn't exist yet, oracle creates a unique index.
- Null values are allowed.
PRIMARY KEY (pag 366):
- The same as the "unique" constraint, except nulls are not allowed.
- Only 1 PK for each table.
- Column-level syntax: "[CONSTRAINT <consname>] PRIMARY KEY"
- Table-level syntax: "[CONSTRAINT <consname>] PRIMARY KEY (<col1>, <col2>, ...)"
- Oracle creates a unique index. It cannot be dropped using "drop index".
- Add a PK: "alter table ste_tbl add [CONSTRAINT <consname>] PRIMARY KEY (<col1>, <col2>, ...)".
FOREIGN KEY (pag. 367):
- Ensures that for the combination of columns, there is data available in the primary or unique index of the parent table.
- The data types of the columns in the 2 tables should be the same.
- Column-level syntax: "[CONSTRAINT <consname>] references <tblname> [(<col1>, <col2>, ...)] [on delete cascade/set null]"
- Table-level syntax: ", [CONSTRAINT <consname>] foreign key [(<col1>, <col2>, ...)] references <tblname> [(<col1>, <col2>, ...)] [on delete cascade/set null]"
- Null values are allowed.
- No index is created.
- The "on delete" clause specifies what has to be done when rows are deleted in the parent table.
- delete them in the child table.
- set them to null
If the clause is omitted, no deletes are allowed on the parent table if the child table contains matching rows.
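A minimal sketch of a parent/child pair with the constraint types above (all names are hypothetical):
create table dept_parent (
  deptid varchar2(10) constraint dept_pk primary key,
  descr  varchar2(50) not null,
  budget number       constraint dept_budget_ck check (budget >= 0)
);
create table emp_child (
  empid  varchar2(10) constraint emp_pk primary key,
  deptid varchar2(10) constraint emp_dept_fk references dept_parent (deptid) on delete cascade
);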
Dropping constraints (pag. 369):
- When a unique or primary key constraint is dropped, oracle drops the associated unique index.
- It's not possible to disable a PK or unique key if enabled foreign keys reference them. Specify "cascade" to disable them as well.
- As an index is created for PK or unique constraints, it is possible to specify its storage parameters:
"Alter table ste_tbl enable primary key using index tablespace ste_indexes storage (initial 2M next 2M)"
Disabling constraints:
- "ALTER TABLE ORDERS DISABLE CONSTRAINT FK_ORDERS;"
Validate constraints (pag. 370)
- The keywords "enable" and "disable" affect only future data. Use "validate" to perform a check on existing data.
- enable validate (default)
- enable novalidate
- disable validate (no dml operations are allowed. Table is read-only)
- disable novalidate (default)
Deferring constraint checks (pag. 371)
The constraint check is done as default when the statement is executed (IMMEDIATE).
If "DEFERRABLE" is specified, the transaction can set the checking-behaviour.
- "alter table ste_tbl add [constraint ste_cons] primary key (nome) deferrable"
- This way, with the command "set constraints <consname>/<ALL> <IMMEDIATE>/<DEFERRED>", the system can do the constraint check when the statement is executed (IMMEDIATE) or when the transaction ends with a commit/rollback (DEFERRED). In the latter case, if the check fails when the transaction ends, the statements that produced the problem are rolled back.
- E.g. "deferrable initially immediate" means that the constraint can be deferred using the "set constraints" command and that the default behaviour is to do the validation when the sql is executed.
- A default behaviour can be specified when the constraint is created:
"alter table ste_tbl add [constraint ste_cons] primary key (nome) deferrable INITIALLY DEFERRED"
If these options have to be changed, the constraint has to be dropped and recreated - it is not possible to alter it.
- PK and UNIQUE constraints defined as DEFERRABLE will create/use nonunique indexes.
- The default if nothing is specified is "NOT DEFERRABLE, INITIALLY IMMEDIATE".
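A sketch of a deferrable constraint together with the "set constraints" command (names are hypothetical):
alter table ste_tbl add constraint ste_pk primary key (nome) deferrable initially immediate;
set constraints ste_pk deferred;                         -- within this transaction, check only at commit
insert into ste_tbl (nome) values ('ciao');
insert into ste_tbl (nome) values ('ciao');              -- duplicate allowed for now
delete from ste_tbl where nome = 'ciao' and rownum = 1;  -- fix the duplicate before the commit
commit;                                                  -- the constraint is checked here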
Chapter 8: Managing views (pag 389)
- The tables that the view is referencing are called "base tables".
- A view can be considered as a stored query or a virtual table. It is stored in the data dictionary.
- The maximum number of columns that a view can contain is 1000. The same as with a normal table.
- When numeric operations are performed in the view, the resulting column will be of "floating datatype" (NUMBER without any precision or scale) (I think that there is a length limit when trying to display the number using a function...).
- When an asterisk (*) is used in a view, Oracle in the background saves the view definition with all the single columns and not just an asterisk. Because of this, when the columns of the base tables are changed, the view should be recreated.
- Subqueries that define views cannot contain the "FOR UPDATE" clause and shouldn't contain "CURRVAL" or "NEXTVAL".
Creating invalid views (pag 393):
- "create FORCE view ste_view as select blabla....". This can be done if the view contains errors, but it still has to be created - its status will be INVALID if it contains any errors. Without the force option, the view won't be created.
- An invalid view that needs to be recompiled, will be automatically recompiled once the view is accessed.
- A view will become invalid if any column in the base table is altered.
Creating read-only views (pag 394):
- "create view ste_view as select * from ste_tbl WITH READ ONLY".
- "WITH READ ONLY" does not allow any DML operations on the view.
Creating constraints on views (pag. 395):
- E.g.: "create view ste_view (name , address, id UNIQUE DISABLE NOVALIDATE) as select name, address, id_num from ste_tbl"
- Constraints on views must always contain the "DISABLE NOVALIDATE" clause.
- Constraints on views are not enforced - they are declarative constraints - they need to be defined on the base tables in order to be enforced.
- The following constraints can be defined on a view: Unique, primary key, foreign key. They are not enforced.
Modifying views (pag. 395):
- "ALTER VIEW" can be used to compile (views are anyway automatically recompiled when they're accessed, but you can force the recompilation using this statement) an invalid view or add/drop constraints.
- "ALTER VIEW ste_view COMPILE".
- If columns are added to the base tables of the view and the code of the view is "select * from ste_tbl", a recompilation WILL NOT ADD THE NEW COLUMN (because Oracle expands the * to all the columns). The view will need to be recreated with "CREATE OR REPLACE VIEW ...".
- "ALTER VIEW ste_view DROP CONSTRAINT ste_view_cons1"
- When the view is recompiled, the dependent objects become invalid.
- "CREATE OR REPLACE VIEW" creates the view if it does not exist yet, or recreates if it exists already.
- If "CREATE OR REPLACE VIEW" is used instead of dropping and recreating the view, the privileges granted on the view are preserved (privileges and user). The dependent stored views and programs become invalid.
- Statements used to modify the DEFINITION of the view:
- Create or replace view
- Create or replace force view
Dropping views (pag 397):
- "DROP VIEW ste_view"
- Views and stored programs that refer to the view become invalid.
DML operations on simple views (pag 400):
- DML operations on a view cannot be performed if the view contains the following:
- DISTINCT
- GROUP BY
- START WITH
- CONNECT BY
- ROWNUM
- UNION, UNION ALL, INTERSECT, MINUS (set operators)
- A subquery in the select clause
- If a view is defined as "create view ste_vw as select deptid from ps_dept_tbl where deptid > '0505001'" and I execute the statement "insert into ste_vw values ('0404001')", the statement WILL WORK.
In order to force a check (create a constraint check) for DML operations on a view, specify "WITH CHECK OPTION" clause: "create view ste_vw as select deptid from ps_dept_tbl where deptid > '0505001' WITH CHECK OPTION [CONSTRAINT ste_cons]".
The forced view check has a "V" in the CONSTRAINT_TYPE of the USER_CONSTRAINTS dictionary view.
DML operations on join views (pag 403):
- A join view is a view with more than 1 base table in the top-level of the from clause.
- It is not possible to update columns that join the base tables, if the "WITH CHECK OPTION" clause is on.
- Key-preserved tables => see pag 404.
- Which operations can be performed on which columns (of tables and views)? Check DBA(/USER)_UPDATABLE_COLUMNS.
Inline views (pag. 406):
- A subquery enclosed in brackets in the FROM clause is called an inline view: "select a.dt from (select sysdate dt from dual) a".
- The subquery in the FROM clause CAN contain an "ORDER BY" (this is not allowed for a subquery in the WHERE clause of the main statement).
Top-'N' analysis (pag. 409):
- The following sql returns the five biggest OPRIDs: "select oprid from (select * from psoprdefn order by oprid desc) where rownum <= 5". Oracle recognizes this and does not sort the whole subquery.
Chapter 9: Other DB objects (pag. 421)
Sequences:
- They exist only in the data dictionary.
- Check page 423 for the syntax. "CREATE SEQUENCE ste_seq..."
- The following options can be used:
- START WITH: the first number that will be generated. Default is 1 for ascending sequences - for descending the default is -1.
- INCREMENT BY: amount that will be incremented (x) or decremented (-x) each time. Default is 1.
- MINVALUE: lowest number that will be generated. Default is NOMINVALUE.
- MAXVALUE: highest number. Default is NOMAXVALUE for ascending sequences and -1 for descending sequences.
- NO / CYCLE: says if everything should start again after reaching the limit. Default is NOCYCLE (error will be generated if limit + 1 is reached).
- NO / CACHE: NOCACHE updates the data dictionary each time a number is generated (performance is bad). CACHE defines the block of sequence numbers held in memory (default is 20) (fast - update is done once every 20 times with a "+20" operation). The default is CACHE, which preallocates 20 values in the SGA.
- When a sequence is created, "START WITH" must be equal or greater than MINVALUE.
- "DROP SEQUENCE bla_seq" deletes the sequence.
- "<seq_name>.NEXTVAL" increases the current number and selects it.
- "<seq_name>.CURRVAL" selects the current number. If no number has been generated yet, CURRVAL will be undefined and an error will be returned (so, execute first a NEXTVAL)
The statement "select last_number from all_sequences where sequence_name = 'STE_SEQ'" displays the greatest number in the cache.
- A "ROLLBACK" doesn't restore the previous value of the sequence, as users shouldn't be waiting for each other.
- (Pag. 428) Sequences cannot be altered directly. If nextval has to be re-set, use one of the following approaches:
- Drop and recreate it (grants are lost and dependent objects are invalidated)
- Select NEXTVAL from it until it reaches the desired value.
- Alter the sequence by changing "INCREMENT BY" ("ALTER SEQUENCE ste_seq INCREMENT BY 12345"), select NEXTVAL, and re-set "INCREMENT BY" to the original value.
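A minimal sequence sketch (names are hypothetical):
create sequence ste_seq start with 100 increment by 1 nomaxvalue nocycle cache 20;
insert into ste_tbl (id, nome) values (ste_seq.nextval, 'ciao');
select ste_seq.currval from dual;   -- works only after NEXTVAL has been called in this session
drop sequence ste_seq;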
Synonyms (pag. 429):
- It is an alias for a DB object.
- Public synonym is available to everybody.
- A synonym can be of a table, view, sequence, procedure, function or package. In the local database or via DB-link.
- Synonyms do not become invalid if the objects they point to are dropped.
- Synonyms can be created for objects that do not exist or for which the owner does not have privileges.
- "CREATE [PUBLIC] SYNONYM <syn_name> FOR [schema.]<obj_name>[@db_link]"
- "DROP [PUBLIC] SYNONYM <syn_name>". (if it's a public synonym, PUBLIC HAS to be specified)
- If I create a private synonym called "sinonimo", somebody else can execute a statement like "select * from ste.sinonimo". It behaves as a normal object. Anyone who has access to the original object can access "ste.sinonimo".
- This is the order of how Oracle searches for an object (pag. 432):
- Search for an object owned by the current user.
- Search for a private synonym owned by the current user.
- Search for a public synonym.
DB-Links (pag. 433):
- "CREATE [SHARED] [PUBLIC] DATABASE LINK <dblink_name> [CONNECT TO <usrname> IDENTIFIED BY <pwd>] USING <'tns_name'>".
- If "connect to" is specified, the password is stored unencrypted in the data dictionary (DD-view USER_DB_LINKS or table SYS.LINK$). It it is not specified, the user will be prompted for a password.
- A private DB-link is not available to other users.
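A sketch, assuming a hypothetical TNS alias 'remote_db' and a remote table ste_tbl:
CREATE DATABASE LINK remote_link CONNECT TO marco IDENTIFIED BY ciaobao1 USING 'remote_db';
SELECT count(*) FROM ste_tbl@remote_link;    -- the link name is appended with @
DROP DATABASE LINK remote_link;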
Create and manage indexes (pag 434):
- Usually Oracle retrieves rows from a table by
- ROWID
- Full Table Scan
- If all the columns of the sql are contained in an index, Oracle does not need to access the table, but just the index.
- Indexes improve performance for data retrieval, but degrade performance for DML operations.
- B-Tree indexes:
- They're the most commonly used.
- They can be unique or nonunique and simple (1 column) or concatenated (multiple columns).
- Good performance with high-cardinality columns.
- Good for retrieving a small number of rows (<10% of the table; if more, a full table scan is usually better).
- Based on binary trees which contain branch blocks and leaf blocks.
- Branch blocks contain the key (index columns) and the address of another index block.
- Leaf blocks contain the key and the rowid(s) of the table rows.
- Leaf blocks form a doubly linked list, so that a range scan can be done in both directions.
- If a leading column of the index is not used in the sql, but later columns are, the later columns cannot be used during the index scan (I'm not convinced this is always true, though).
- Bitmap indexes (pag. 436):
- They're primarily used for static data and decision-support systems (because they don't support row-level locking).
- They can be simple (1 column) or concatenated (multiple columns). Actually they're almost always simple (best performance).
- Good performance with low- to medium cardinality columns.
- Multiple bitmap indexes can be combined with AND and OR conditions.
- The bitmaps are stored in the leaf blocks (probably it doesn't have branch blocks).
- Each row has a bitmap entry which may contain TRUE, FALSE or NULL.
- Best thing is to create a bunch of simple bitmap indexes. This way if the query changes, the indexes can still be used.
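A sketch of both index types, assuming a hypothetical table orders(order_id, customer_id, status):
CREATE UNIQUE INDEX orders_id_uix ON orders (order_id);            -- B-Tree, high cardinality
CREATE INDEX orders_cust_ix ON orders (customer_id, status);       -- concatenated B-Tree
CREATE BITMAP INDEX orders_status_bix ON orders (status);          -- low cardinality, static data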
Call stored programs (pag. 439):
- They can be written in PL/SQL or Java.
- They can be saved as procedures, functions, packages or triggers. They may contain procedural code with loops, cursors and conditional branching.
- Procedures and Functions:
- They are "named programs".
- They're stored in the database.
- They both take 0 or more parameters.
- Functions always return a value (Java methods that return values are called as functions).
- Procedures return 0 or more values (through their parameter list) (void Java methods are called as procedures).
- Procedures are called from sqlplus with an "exec" in front of it. E.g.:
- "exec dbms_stats.gather_table_stats(<parm1>, <parm2>, ...)
- Functions are called anywhere an expression is valid. E.g.:
- in an assignment: "marco := open_orders(sysdate, 30);"
- in an expression: "if (open_orders(sysdate, 30) < 500) then..."
- in a default value assignment: "declare marco NUMBER default open_orders(sysdate, 30)...."
- in a sql: "select * from ste_tbl where open_orders(sysdate, 30) < 500"
- in the parms of another program: "process_vendor(vendor_id, open_orders(vendor=>vendor_id))"
- Passing parameters:
- Two ways of passing parameters:
- Positional notation
- E.g.: "dbms_utility.analyze_schema('HR', 'ESTIMATE', NULL, 10)"
- Named notation
- E.g.: "dbms_utility.analyze_schema(schema=>'SCOTT', estimate_percent=>10, method=>'ESTIMATE'"). Note that I did not need to specify this time the NULL value and that the order can be different.
- The names of the parameters are contained in the package specification.
- Packages
- They're containers of procedures, functions and/or data structures.
- Packages have:
- a package specification:
- contains function + procedures headers, externally visible data structures.
- a package body:
- contains the declaration, executable and exception sections of all the procedures and functions contained in it.
- Differences between packaged and nonpackaged programs (pag 442):
- Package data:
- persists for the duration of the user's session and across the commits in the session.
- granting an execute privilege on a package grants it for everything contained in it (functions, procedures, ...). It's not possible to grant access to only one thing in the package.
- they support "overloading" (multiple programs with same name - which one to be called depends on parms nbr and type)
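A minimal package sketch, assuming a hypothetical table ste_tbl with a created column; the specification exposes one function, the body implements it:
CREATE OR REPLACE PACKAGE order_pkg AS
  FUNCTION open_orders (p_date DATE, p_days NUMBER) RETURN NUMBER;
END order_pkg;
/
CREATE OR REPLACE PACKAGE BODY order_pkg AS
  FUNCTION open_orders (p_date DATE, p_days NUMBER) RETURN NUMBER IS
    v_cnt NUMBER;
  BEGIN
    SELECT count(*) INTO v_cnt FROM ste_tbl WHERE created >= p_date - p_days;
    RETURN v_cnt;
  END open_orders;
END order_pkg;
/
SELECT order_pkg.open_orders(sysdate, 30) FROM dual;   -- positional notation; named notation can be used in PL/SQL calls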
Chapter 10: Security (pag. 455)
- "CREATE USER ..." creates a new user.
- Authentication can be done in three ways:
- DB authentication (pag 457): userid and password are ok for that DB
- It is the default type of account.
- "CREATE USER marco IDENTIFIED BY ciaobao1"
- The keyword "identified by" tells Oracle that this is a DB-authenticated type of account.
- External authentication (pag 457): userid is ok for that DB. The password is checked by the OS or the network.
- This kind of account is sometimes called OPS$, because the standard prefix for such userids is OPS$ (configured in the init.ora file as parameter "os_authent_prefix"). E.g.:
"CREATE USER ops$marco IDENTIFIED EXTERNALLY" - The parameter os_authent_prefix is frequently set to blank. This way, an account creation would look like this:
"CREATE USER marco IDENTIFIED EXTERNALLY". - These types of accounts are used usually in cron and batch jobs, so that passwords don't need to be saved somewhere in the system.
- Global authentication (pag 457)): userid is ok for that DB. The password is checked by the "Oracle Security Service"
- In this case a password is checked by a "X.509" enterprise directory service. This service is most common in big organizations with single sign-on systems.
- "CREATE USER marco IDENTIFIED GLOBALLY AS 'CN=marco, OU=KLFA11, 0=CreditSuisse, C=CH'"
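The three account types side by side (a sketch - the names and the directory DN are placeholders):
CREATE USER marco IDENTIFIED BY ciaobao1;                                             -- DB authentication
CREATE USER ops$marco IDENTIFIED EXTERNALLY;                                          -- external (OS) authentication
CREATE USER marco_g IDENTIFIED GLOBALLY AS 'CN=marco,OU=KLFA11,O=CreditSuisse,C=CH';  -- global authentication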
- Assign attributes to accounts (pag 458):
- Password: can be changed with "ALTER USER marco IDENTIFIED BY ciaobao1"
- Default tablespaces: the default is the SYSTEM tablespace.
"CREATE USER marco IDENTIFIED BY ciaobao1 DEFAULT TABLESPACE our_tables"
"ALTER USER marco DEFAULT TABLESPACE our_tables" - Temporary tablespace: the default is the SYSTEM tablespace.
"CREATE USER marco IDENTIFIED BY ciaobao1 TEMPORARY TABLESPACE temp".
"ALTER USER marco TEMPORARY TABLESPACE temp"
To change it for all the users: "ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp"
- It is not yet possible to set a default tablespace for indexes.
- Tablespace quotas:
- Apparently no default value?
- The amount can be specified in bytes (e.g. 1024), in Kilobytes (1K) and in Megabytes (1M).
- "CREATE USER marco IDENTIFIED BY ciaobao1
DEFAULT TABLESPACE user
QUOTA UNLIMITED ON user
QUOTA 20M ON indexes"
- "ALTER USER marco QUOTA 2500K ON indexes" (by default the quota is measured in bytes)
- Profile (pag. 459) => see pag 483 for further details:
- The default profile allows unlimited resources.
- This option also needs to be activated in the file "init.ora": "resource_limit = true".
- Limits the resource usage (resources may be connect time, idle time, logical reads per session, failed login attempts, password verification function, ...). It is possible to limit the number of concurrent sessions. It is NOT possible to limit the number of physical reads, as it depends on how fit the cache is.
- "ALTER PROFILE default LIMIT idle_time 5". Sets a maximum idle time of 5 minutes for the people who have a default profile.
- "CREATE USER marco IDENTIFIED BY ciaobao1 PROFILE superuser"
- "ALTER USER marco PROFILE superuser
- alter profile oe limit failed_login_attempts 3;
- password_grace_time is the number of days after the password expires during which the user is reminded to change it but can still log into the system.
- password_life_time is the number of days the password is valid; password_grace_time is not included here.
- Total = password_life_time + password_grace_time (see the profile sketch below).
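A profile sketch combining the settings above (the name app_users and the limits are just examples; the kernel limits like IDLE_TIME only take effect if resource_limit = true):
CREATE PROFILE app_users LIMIT
  FAILED_LOGIN_ATTEMPTS 3
  PASSWORD_LIFE_TIME    60
  PASSWORD_GRACE_TIME   7
  IDLE_TIME             30;
ALTER USER marco PROFILE app_users;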
- Roles (pag. 460) => see pag 463 for how to create roles:
- "ALTER USER marco DEFAULT ROLE ALL EXCEPT readonlyuser"
- Roles cannot be enabled when the user is created.
- It is not possible to know who granted the DBA role to somebody.
- Passwords and accounts (pag. 460-461)
- Passwords can be forced to expire:
- ALTER USER marco IDENTIFIED BY pwdpwd
- ALTER USER marco PASSWORD EXPIRE
- CREATE USER marco IDENTIFIED BY welcome PASSWORD EXPIRE
- An account can be locked, for example, when that user owns tables but nobody is supposed to log in as that user.
- ALTER USER marco ACCOUNT LOCK
- To unlock it: ALTER USER marco ACCOUNT UNLOCK
- It is possible to check if the terms of the licenses are being respected. The parameters "license_max_users", "license_sessions_warning" and "license_max_sessions" of the init.ora file control them.
- license_max_users
It is compared to the result of the query "select count(*) from dba_users". System users (SYS, SYSTEM, and so on) are counted. If the number of users is higher than this parameter, an entry is written to the alert log the next time the db is started.
- license_sessions_warning
It is a warning threshold for concurrent sessions. If the value is exceeded, a warning is written in the alert log file (background_dump_dest\alert.log).
- license_max_sessions
It is the maximum number of concurrent sessions. If this value is exceeded, two things can happen:
- If the user that wants to log in is a dba, he's still allowed to log in, but he will get an error message.
- Otherwise, the logon fails with an error message and a warning in the alert log file.
- Roles: creating and using them (pag. 463).
- Privileges can be granted to a role. A role can be granted to another role or a user. Roles are used only to administer privileges.
- By default a role doesn't have a password.
- CREATE ROLE mioruolo [IDENTIFIED <BY mypassword / EXTERNALLY / GLOBALLY> ]
- If a role is created with a password (blabla... IDENTIFIED BY ....), the role is disabled by default.
- To enable a role: "SET ROLE rolename IDENTIFIED BY mypassword" (pag 464). I don't know how to disable a role. (See the sketch below.)
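A sketch of a password-protected role (names are hypothetical):
CREATE ROLE readonlyuser IDENTIFIED BY segreto;
GRANT SELECT ON ste.ste_tbl TO readonlyuser;
GRANT readonlyuser TO marco;
-- In marco's session the role stays disabled until it is enabled explicitly:
SET ROLE readonlyuser IDENTIFIED BY segreto;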
- Privileges: granting and revoking them (pag 464)
- They allow a user to access or execute somebody else's objects or programs, as well as to perform system-level operations like creating and dropping objects.
- Privileges can be granted to a user by granting them to the user PUBLIC or to a role.
- There are three types of privileges:
- Object privileges
Access tables, functions, libraries, ...
- System privileges
Connecting to the DB, alter user session, create tables, create users, ...
Query the data dictionary view SYSTEM_PRIVILEGE_MAP in order to see all the system privileges.
- Role privileges
Privileges that the user has because he has the role.
- Table of objects and privileges that can be assigned to them (pag 465):
alter, delete, execute, index, insert, read, reference, select, update
- List of system privileges (pag 468):
- cluster, database, index, procedure, profile, role, rollback segment, sequence, session, synonym, table, tablespace, user, view.
- Special system privileges: SYSDBA and SYSOPER (pag 473)
- Other system privileges (pag 473): analyze any, audit any, comment any, grant any privilege, grant any role.
- Assign privileges:
- GRANT <role / system privilege> TO <user / role / PUBLIC> [WITH ADMIN OPTION]
- GRANT <object privilege> TO <user / role / PUBLIC> [WITH GRANT OPTION]
- GRANT ALL TO <user / role / PUBLIC> [WITH GRANT OPTION]. The "ALL" contains the following privileges: select, insert, update, delete, alter, index and reference. "TRUNCATE" is not a table privilege and is not included in the ALL.
- Once a privilege is granted, there is no need for a logoff+login. It is active immediately.
- The "WITH GRANT OPTION" makes it possible for the receiver of the access, to be able to grant that access to anybody else. If, after granting the access to somebody else, the user who granted it is dropped, the receiver loses the access. This is possible because in the data dictionary both the receiver and the granter are kept. Page 474.
- The grant also records who gave the access. If access to the same object is granted to a user by two different users, both of them have to revoke the access; if just the first user revokes it, the user still has access (because of the second grant).
- The "REFERENCE" privilege can be granted only to a user.
- There is no REVOKE ANY system privilege.
- Assigning system + role privileges (WITH ADMIN OPTION):
- In this case, the granter of the privilege (all roles) is not recorded. This means that once that the receiver got the access, he keeps it even if the granter is dropped. If anybody revokes this privilege, the privilege is gone.
- Assigning role privileges (pag 477):
- Any combination of system, object and role privileges can be granted to a role.
- Read about stored sql on page 477.
- Revoking privileges:
- REVOKE <role / system privilege / object privilege> from <user / role / PUBLIC>
- Object privileges (e.g. update on a table) can be revoked only by the grantors.
- If I want to revoke only the "WITH ADMIN OPTION" or the "WITH GRANT OPTION", the whole privilege must be revoked and reassigned. This might cause problems if the receiver has already granted the object privilege to somebody else, as that access will be lost too (a chain reaction).
- Data dictionary views (pag 480):
- DBA_TAB_PRIVS contains a list of the privileges granted.
- ALL_TAB_PRIVS_MADE contains a list of the privileges on another user's procedure granted to somebody else.
- List of data dictionary views: pag 480 + 481 + 482
A grant on a single column of a table is assigned as follows: "GRANT UPDATE (name, age) on stetable to bruna" (see the sketch below).
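A few grant/revoke examples as a sketch (objects and users are hypothetical):
GRANT CREATE SESSION, CREATE TABLE TO marco;              -- system privileges
GRANT SELECT ON stetable TO bruna WITH GRANT OPTION;      -- object privilege, re-grantable
GRANT UPDATE (name, age) ON stetable TO bruna;            -- column-level update privilege
REVOKE SELECT ON stetable FROM bruna;                     -- only the grantor can revoke it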
Managing user groups with profiles
There are two categories of resource settings:
- Password
- These settings are always active.
- Kernel
- They're active only if they're explicitly enabled in the Oracle configuration file (init.ora - resource_limit=TRUE) or dynamically ("alter system set resource_limit=TRUE").
Create and alter profiles:
- Each database starts with a profile called DEFAULT. It has all parameters set to "unlimited".
- Syntax of "CREATE PROFILE" statement => page 483.
- When a new profile is created, the parameters that are not explicitly limited, are inherited from the DEFAULT profile.
- The settings of the DEFAULT profile can be changed using the ALTER statement.
- Not specifying a parameter and specifying it with the DEFAULT value have the same effect.
- UNLIMITED means that no limit is used for that resource.
- Syntax of "ALTER PROFILE" statement => page 484.
- If the DEFAULT profile is altered, all other profiles that use the parameters of the DEFAULT profile will inherit the change.
Kernel resource settings: page 485
Password resource settings: page 488
More:
- Generate hash values in Oracle >= 10: select ora_hash('hello world') from dual
Oracle DB: informations about DML and DDL statements, datafiles, tablespaces, indexes, etc...
Informations for Oracle Certified Associate - part 2
Oracle Database fundamentals 2
Here are some notes I took about Oracle 9i while reading OCA/OCP: Oracle9i DBA Fundamentals I. Stuff like database structure, names of the DB processes, etc.
Basics
SQL: declarative language (you state the desired outcome).
C: procedural programming language (you write the process used to get the outcome).
RDBMS: (Relational DB management system) Translates SQL into a process which Oracle can use.
Select statements are processed as follows:
- Open cursor
- Search shared pool and see if SQL is there (library cache)
- Validate SQL syntax
- Validate tables & columns
- Acquire locks
- Check user privileges.
- Parse statement if SQL does not exist in library cache (gen. expl. plan)
- Execute statement. The server process executes it and writes into buffer cache
- Fetch values from cursor & return result to user process
- Repeat the fetch if more rows are requested.
DML (update + delete) statements are processed as follows:
- Points 1 to 8 of select
- Get data from disk into buffer cache
- Get lock on this data
- Do DML in buffer cache (dirty blocks are generated)
- Get lock on undo segment.
- Write new & old version into undo.
DML (insert) statements are processed as follows:
- Points 1 to 8 of select.
- Get a block from disk into buffer cache.
- Insert new row into block.
- Get a lock on the undo segment.
- Write old & new version of data into the undo segment.
- Write redo entry into redo log buffer.
Advantage of copying data into buffer cache: 1) Ram faster than HD 2) User doesn't have to wait for HD-write.
When a commit is issued:
- Locks on the table and/or rows are released.
- Locks on undo segments are released. When this space is needed again later, this information is overwritten.
- Redo entry saying that transaction was committed is generated.
SCN: (system change number) Is generated for each transaction.
Logical components of a DB
Tablespace | | | | | | | |
---|---|---|---|---|---|---|---|
Segment | | | | Segment | | | |
Extent | | Extent | | Extent | | Extent | |
Block | Block | Block | Block | Block | Block | Block | Block |
- Block:
- its size must be a multiple of the OS block size.
- the block size is defined at DB-creation by setting the parm DB_BLOCK_SIZE.
- Extent: group of contiguous blocks
- Segment:
- group of extents.
- multiple extents for the same segment can be on different data files of the same tablespace.
- One table has 1 segment (exception: partitioned tables, where each partition has its own segment and can be in its own tablespace)
- There are 4 types of segment:
- data
- index
- temporary
- undo
- Tablespace: group of segments.
Physical components of a DB
- data files
- redo log files (*can be multiplexed)
- control files (*can be multiplexed)
1 datafile -> 1 tablespace
1 tablespace -> n datafiles
See pag. 118 for syntax of "create database":
- REUSE: overwrites existing files, otherwise an error is raised if the file already exists.
- LOGFILE: location of online redo files.
- MAXLOGHISTORY: RAC - Max nbr of archived redo log files.
- MAXDATAFILES: Max nbr of data files for each instance.
- DB_FILES: Max data files accessible for each instance.
- NOARCHIVELOG: the default.
- UNDO TABLESPACE: used only if parm UNDO_MANAGEMENT=AUTO.
Data dictionary
- User SYS is the owner of the data dictionary.
- It is created in the SYSTEM tblspace.
- SQL.BSQ creates:
SYSTEM tblspace.
SYSTEM rollback segment in the SYSTEM tblspace
SYS & SYSTEM users
Dictionary tables, clusters, indexes...
Roles PUBLIC, CONNECT, RESOURCE, DBA, ....
DUAL table (the owner is SYS user)
- CATALOG.SQL:
Connect as SYS and run the script.
Creates views & synonyms
- CATPROC.SQL: PL/SQL items
Memory structures
- SGA (System Global Area - shared) (Total size defined by parm SGA_MAX_SIZE)
- It is created at instance startup (STARTUP NOMOUNT is enough).
- The size is allocated using granules. If the SGA size is < 128MB, the granule size is 4MB, otherwise 16MB.
- A minimum of 3 granules is allocated: 1 for the shared pool, 1 for the buffer cache and 1 for the redo log buffer.
- Shared pool (size defined by parm SHARED_POOL_SIZE)
- Library cache
- Divided into a shared area (sql parse tree and exec plan) and a private area (bind vars)
- Dictionary cache
- contains object definitions.
- is the only one which keeps infos in a row structure instead of a buffer (block). Sometimes referenced as "row cache".
- Control structs
- Buffer cache (see V$BUFFER_POOL)
Where rows are read into or changed/inserted/deleted before being written to data files.
- Keep area
- This data will stay in memory and will not age out.
- Recycle area
- Blocks in here will be thrown out as soon as possible.
- Default area
- This data uses an LRU algorithm to age out data. Data that does not fit into keep or recycle is put in here.
- Redo log buffer cache
- Keeps track of changes done with the statements "insert / update / delete / create / alter / drop".
- Large pool (Optional - size defined by parm LARGE_POOL_SIZE)
- Used to do backups / restores and for keeping session memory for MTS config.
- Java pool (Optional - size defined by parm JAVA_POOL_SIZE)
- PGA (Program Global Area - private - allocated for each server process)
- Global temp tables are perhaps allocated in here?
- Sort area
- If shared servers are used, this is allocated from the SGA (not from the PGA).
- Contains the locks as well.
- Size determined by SORT_AREA_SIZE / SORT_AREA_RETAINED_SIZE.
- If the sort is bigger than SORT_AREA_SIZE: 1) data is divided into groups that fit 2) they are sorted separately 3) the results are saved in the user's temp tablespace and 4) the final result is generated by merging all the separate sorts.
- Session info
- *only if dedicated servers are used. If shared servers are used, it's in the SGA.
- Stack space (session vars)
- *only if dedicated servers are used. If shared servers are used, it's in the SGA.
The SGA must fit in "real" memory. To calculate it use DB_CACHE_SIZE, LOG_BUFFER, SHARED_POOL_SIZE.
Background processes
- DBWn (Database writer):
- Writes dirty buffers to disk from buffer cache.
- By default only 1 DBWn is started. Max is 10 (DBW0-DBW9). See DB_WRITER_PROCESSES.
- Writes are done when:
- server process needs more space in the buffer cache.
- LGWR wants to write (checkpoint)
- Every 3 seconds because of a timeout.
- Nbr of dirty buffers reaches a limit.
- Writes are done as well when a table is dropped or truncated.
- LGWR (Log writer):
- reads from redo log buffer to online redo log files.
- Writes from buffer to disk when:
- commit
- when redo buffer is 1/3 full
- when DBWn writes
- every 3 seconds
- when there is 1MB of redo log records
- Min is 2 online redo log files (writes are done in a circular way).
- CKPT (Checkpoint writer):
- reduces time needed for instance recovery.
- SMON (System monitor):
- Does instance or crash recovery at DB-startup, or when files or tablespaces are brought back online.
- Cleans up temp segments.
- Coalesces space (only if PCTINCREASE of the tablespace is set to nonzero)
- PMON (Process monitor):
- Cleans up failed user processes and frees up resources of those processes.
- ARCn (Archiver):
- Active only if DB is in archivelog mode and automatic archiving is enabled (LOG_ARCHIVE_START = TRUE)
- Copies the online redo log files to other locations.
- Max is 10
- Started by LGWR
- RECO (recoverer)
- Started only if "distributed transactions" is on.
- Resolves "in-doubt" transactions (?).
- LOCK:
- Used for inter-instance locking if "real application clusters" are used.
- RAC is 1DB -> n instances
- QMNn (Queue monitor):
- Monitors message queues.
- Max is 10 (see parm AQ_TM_PROCESSES)
- Dnnn (Dispatcher):
- Create at least 1 dispatcher for each network protocol used in oracle.
- Snnn (Shared server process):
- Min is value of parm SHARED_SERVERS
- Max is value of parm MAX_SHARED_SERVERS
Server processes
There are 2 ways of setting them up:
- Dedicated servers
Every user who connects to Oracle has his own server process which reads from disk and puts the data into the buffer cache. This means more memory & cpu consumption. Each time a connection is established, the system has to create a dedicated server process.
- Shared servers (MTS - multithreaded server)
Many users are served by one or few server processes.
usr process -> dispatcher (perhaps more than 1) -> queue -> shared server process
usr process -> -> shared server process
The "dispatchers / queues / shared server processes" exist always - don't need 2b created when somebody connects (see "Snnn" background process).
Where is the user session information?
Shared servers are used -> control structs of shared pool (MTS architecture).
Dedicated servers are used -> control structs of PGA.
OFA (Optimal / Oracle flexible architecture)
Advantages:
- Good performance.
- Flexible, so that db growth is easy.
- Encourages consistent file-name conventions. Easy to distinguish files & associate them to their tblspaces.
- Keeps contents of tblspace separate (e.g. idx & tblspace are separate). Fragmentation & i/o contention are low.
- Supports multiple "oracle_home" => multiple oracle releases (7,8,9,10) can run concurrently.
- Possible to have 1 listener for multiple "oracle_home".
- Admin infos of each db is separate.
When I create the DB, two login IDs are created, as well as the role "DBA" (has all admin privileges):

ID | Initial password |
---|---|
SYS | change_on_install |
SYSTEM | manager |
2 authentication systems:
- OS
- Set "remote_login_passwordfile" to "none" (default)
- The unix user must be a member of the oracle OSDBA or OSOPER unix groups. In Unix, usually it is the "DBA" group.
- The sysoper (or osoper) privilege can perform startup, shutdown, etc. (pag. 81).
- The sysdba (or osdba) privilege can perform everything that osoper/sysoper can do, plus create database, all system privileges with "admin option" and time-based recovery.
- Can connect by issuing "connect / as sysdba" (or "as sysoper").
- No user is needed in the Oracle DB.
- Password file
- The user needs to have the appropriate privileges in Oracle.
- Do the following to setup a pwdfile authentication:
- Create a pwdfile and define a pwd for the SYS user:
"orapwd file=<filename> password=<pwd> (password that SYS will have) [entries=<maxusers>] - Set "remote_login_passwordfile"
- The pwds for the users SYS and INTERNAL can always be defined.
- Exclusive:
- File can be used only for 1 instance.
- Can add more users to the file that will have SYSDBA or SYSOPER.
- Shared:
- File can be used for multiple instances.
- No other users than SYS and INTERNAL can be specified.
- Grant to the users the SYSDBA/SYSOPER privileges -> they're added to the pwdfile.
- When I connect to the DB using the SYSDBA privilege I connect to the SYS schema.
- When I connect to the DB using the SYSOPER privilege I connect to the PUBLIC schema.
- See view V$PW(D)FILE_USERS to know who has SYSDBA/SYSOPER privileges and is in the password file.
- Start/Stop an instance -> SYSDBA/SYSOPER privilege is needed.
- Objects created by people logging in as sysdba or sysoper will be owned by SYS.
Starting a DB
- Start the instance
- Mount the DB
- Open the DB for normal use (default)
Command "startup" | Command "alter" | Remarks |
---|---|---|
startup nomount | | Only views that read from the SGA can be accessed. |
startup mount | alter database mount | Control files are opened. |
startup open read only | alter database open read only | No redo infos are generated |
startup open | alter database open | |
startup force | | Used when the DB wasn't shut down properly. It will shut down the DB and restart it if it's already running. It's like shutdown abort + startup open. |
startup restrict | alter database <enable/disable> restricted session | Allow only users with "restricted session" privilege. Already connected users stay connected. |
Command "shutdown" | Remarks |
---|---|
shutdown normal | Waits until all users disconnect. |
shutdown transactional | Waits until all active transactions are committed or rolled back. No new transactions are allowed. |
shutdown immediate | Immediately rolls back all transactions. |
shutdown abort | will NOT rollback active transactions -> instance recovery during startup is needed |
startup force | shutdown abort + startup open. |
Files
Parameter
- $ORACLE_HOME/DBS is its default location.
- init<sid>.ora is the default name.
- Contains, among other things, the dbname, the location of the control files, the location of the archived log files, whether the ARCn process has to be started, and the location of dump & trace files.
- spfile must be created from a pfile.
- v$parameter -> current session
- v$system_parameter -> system parameters
- use "alter system/session" to change them.
alter system set <parm>=<value> scope = ... :

scope | Remark |
---|---|
spfile | Takes effect only after the next restart. |
memory | Takes effect immediately but is not saved. |
both | Takes effect immediately and is also saved in the spfile. |
- parm. CORE_DUMP_DEST: available only in unix
- parm. USER_DUMP_DEST: writes deadlocks, user session errors.
- parm. BACKGROUND_DUMP_DEST: where the alert log file ("alert_<sid>.log") is written, which lists:
- non-default init parms
- DB startups & shutdowns
- DB archiving & recoveries
- modifications to tablespaces, rollback segments, data files.
- parm. DB_CREATE_FILE_DEST & DB_CREATE_ONLINE_LOG_DEST_n (OMF is active if at least 1 of them is specified).
- DB_CREATE_FILE_DEST: Default location for new datafiles (file will be "ora_[?].dbf"). Can be changed with "alter system".
- DB_CREATE_ONLINE_LOG_DEST_n (max is 5): Locations of online redo log files ("*.log") and control files ("*.ctl"). Can be changed with "alter system".
- If only one of the directories is there, the control files will be saved in there. I don't know what happens when all 3 are specified (probably db_create_online_log_dest_n will be used, as it is probably multiplexed - a minimum of 2 is required for the round-robin writing).
- No control file is required to create the db. They're created when I create the db.
- Recommended are 2 control files on 2 disks.
- parm. CONTROL_FILES: Default location for the control files.
- It contains:
- db name (a control file belongs to only 1 db)
- db creation timestamp.
- data files name, location, on/offline status. If offline, its scn.
- redo log files name & location and redo log archive infos (updated by ARCn).
- tblspace names
- current log sequence number (updated in the controlfile by LGWR)
- current checkpoint nbr (updated by CKPT).
- start & end of undo segments.
- RMAN info
- Its size is determined by:
- MAXDATAFILES
- MAXINSTANCES
- MAXLOGFILES
- MAXLOGMEMBERS
- MAXLOGHISTORY
- Oracle recommends a minimum of 2 control files (max is 8). There are 2 ways of multiplexing them: init.ora & spfile.
- init.ora:
- shutdown db
- cp or mv files
- change parm "CONTROL_FILES"
- restart db (If the db is not created yet, it will create & multiplex all ctrlfiles specified in "CONTROL_FILES". The 1st file of the list is used for reading - others are just multiplexed.)
- spfile:
- "alter system set control_files = <filename> scope=spfile"
- "shutdown normal" (shutdown the db)
- copy an existing file to the new location
- startup (start the instance)(OMF-managed control files are created if "CONTROL_FILES" is not mentioned, but DB_CREATE_ONLINE_LOG_DEST_n.)
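The spfile variant as a sketch (the paths are placeholders):
ALTER SYSTEM SET control_files = '/u01/oradata/db1/control01.ctl', '/u02/oradata/db1/control02.ctl' SCOPE = SPFILE;
SHUTDOWN NORMAL
-- copy the existing control file to the new location at OS level, then:
STARTUP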
- Re/create control files (if I want to change "MAX" limits, db name or I lost old control files):
- Prepare names of data files and redo log files.
- Prepare a script as on pag. 149. See the option NORESETLOGS (the data files already exist and shouldn't be changed; missing datafiles are shown after restart as "MISSINGnnnn" in v$datafile and media recovery will reinsert them). The script can also be prepared with "alter database backup controlfile to trace" (the file must be edited: comments have to be deleted before it can be used) or "alter database backup controlfile to <filename> reuse".
- Shutdown db
- startup nomount
- Run script
- alter database open
- Shutdown db and take backup.
- Query control file information:
- names and status of controlfiles: v$controlfile
- names of controlfiles: show parameter control_files
Redo log
- LGWR writes when:
- A user commits (a SCN is assigned to the redo records)
- Redo log buffer is 1/3 full
- There is ~1MB of changed data in the buffer
- Each db has its own online redo groups (RAC: oracle keeps track to which instance the entries belong).
- Statements:
- create / alter:
Statement | Statement | Remark |
---|---|---|
create database xxx | logfile [<group 1>] [fname] size 10M, logfile [<group 2>] [fname] size 10M; | Redo log group 1+ 2 are created. |
create database x logfile | [<group 1>] ('/blah1a.log', '/blah1b.log') size 10M, [<group 2>] ('/blah2a.log', '/blah2b.log') size 10M | Redo log groups 1 + 2 are multiplexed. |
alter database add logfile | (group 3) ('/blah3a.log', 'blah3b.log') size 10M | |
alter database add logfile | '/blah4a.log' reuse | Reuses an existing file. Uses size of the already existing file. |
alter database add logfile | member '/blah4b.log' to group 4; | |
alter database add logfile | member '/blah4b.log' to ('/blah4a.log'); | Searches in which group those members are used and adds it to that group. |
- drop groups:
- Check in V$LOG if the group is being used ("active"). If yes, issue "alter system switch logfile".
- alter database drop logfile group x.
- delete file from disk manually
- drop & re-add:
- alter database clear logfile group x
- Rename members:
- Shutdown db (do full backup).
- cp / rename files
- startup mount
- alter database rename file <old> to <new>;
- alter database open;
- Backup ctrl file.
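The drop / clear operations as a sketch (group numbers are examples):
ALTER SYSTEM SWITCH LOGFILE;            -- make sure the group to drop is not the current one
ALTER DATABASE DROP LOGFILE GROUP 3;    -- remove the files at OS level afterwards
ALTER DATABASE CLEAR LOGFILE GROUP 4;   -- re-initializes the group instead of drop & re-add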
Log switch:
- The file actively written is called the "current" log file.
- Log files required to recover the instance are called "active". The others are "inactive".
- Log switches are mentioned in the alert log file.
- Log switches occur:
- When the current log file is full => a log sequence nbr is allocated to the new file.
- alter system switch logfile;
- Log switches cause a checkpoint:
- Header of data files & ctrl files updated by CKPT.
- Blocks are written to disk by DBWn.
- A checkpoint happens when:
- There is a log switch (redo log file is full).
- Shutdown
- Tblspace is put to read-only or backup mode.
- Tblspace or datafile is taken offline.
- alter system checkpoint;
- alter system switch logfile;
- by parm FAST_START_MTTR_TARGET (says how long a db recovery has to take).
Set "log_checkpoints_to_alert = true" to trace them.
Archive logs
- They're copied by ARCn to a different location. LGWR waits for ARCn.
Statement (the first OR the second) | Statement | Remarks |
---|---|---|
alter system set | LOG_ARCHIVE_DEST | <= mandatory |
LOG_ARCHIVE_DUPLEX_DEST | (optional) | |
alter system set | LOG_ARCHIVE_DEST_n <mandatory/optional> [reopen <time seconds to wait>] | Max is 10. mandatory means that it has absolutely to write in there. reopen means it retries if it is busy. |
Remark: LOG_ARCHIVE_MIN_SUCCEED_DEST is ignored if less than "n".
- Set archivelog:
- Shutdown db.
- Change parms.
- startup mount;
- alter database archivelog;
- alter database open;
- Unset archivelog:
- Shutdown db.
- startup mount;
- alter database noarchivelog;
- alter database open;
Archivemode info: "archive log list" or v$database
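Putting the archive-log steps together as a sketch (the destination path is a placeholder and an spfile is assumed):
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch MANDATORY REOPEN=60' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST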
Tablespaces and data files
- The datafiles I specify upon db creation are assigned to the system tablespace.
- For dictionary-managed tablespaces, when extents are allocated or freed, undo info is generated because the dictionary tables are changed. No undo for locally managed tablespaces.
- (OMF) If I create a tablespace without giving filenames, the value of DB_CREATE_FILE_DEST will be used.
- Parameters:
- Default value for INITIAL and NEXT:
Parm | Default value | Minimum value |
---|---|---|
INITIAL | 5 db blocks | 2 blocks if it's dict.managed. 3 blocks if it's locally managed. |
NEXT | 5 db blocks | 3 db blocks. |
- PCTINCREASE
Default = 50.
- MINEXTENTS:
Default = 1
Extents allocated at table creation.
Calculated using NEXT & PCTINCREASE
- LOGGING (redo log):
Default = active.
For ddl.
- SEGMENT SPACE MANAGEMENT:
Default = manual.
auto (use bitmaps and not free lists)
Makes sense using it only for locally managed tblspaces.
- Locally managed tblspaces cannot have the parms:
- default storage
- temporary
- minimum extent
- SEGMENT SPACE MANAGEMENT:
- AUTO
- Bitmaps are used.
- Parameters that are ignored: FREELISTS / FREELIST GROUPS / PCTUSED.
- Transform the type of tblspace (the extent allocation method (local or dict.-managed) cannot be changed):
- Dictionary-managed tblspaces (if they're empty):
permanent <=> temporary
- Locally managed tblspaces:
permanent <= temporary
- Statements for taking a tablespace offline (sqls are rolled back when they're online again):
Statement | Remark |
---|---|
alter tablespace x offline normal | All datafiles must be online. |
alter tablespace x offline temporary | Media recovery might be needed for offline datafiles. A checkpoint on datafiles is done. |
alter tablespace x offline immediate | No checkpoint -> media recovery is needed |
- Statement for taking a tablespace online: alter tablespace x online;
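A tablespace sketch using the options above (path and sizes are placeholders):
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/db1/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
  SEGMENT SPACE MANAGEMENT AUTO;
ALTER TABLESPACE app_data OFFLINE NORMAL;
ALTER TABLESPACE app_data ONLINE;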
Blocks
Parameters for data blocks
- PCTFREE:
- Default = 10
- PCTUSED:
- Default = 40
- Cannot be used with index
- INITRANS:
- Default = 1 (data) or 2 (indexes & clusters)
- It's the number of concurrent DML transactions in the worst case.
- If too low, space will be allocated in the block itself and not in the header and will never be released.
- MAXTRANS:
- Max number of concurrent transactions.
Extents
- If locally managed:
- Storage parms are ignored.
- Uniform (e.g. "extent management local uniform size 512k") or variable (managed by oracle)
- Are made of contiguous blocks.
Segments
- v$sort_segment: Displays only infos of segments saved in temporary tblspaces.
- A partitioned table has more than 1 segment.
Undo segments
- Undo is created before update / delete / insert (with rowid) rows.
- Operations done in the undo segments are saved as well in the redo logs.
- Segments are freed with transaction end, but not destroyed (see snapshot too old error).
- The undo segment for system tblspace is always created at db creation.
- Automatic undo management:
- undo_management: auto (manual is possible) (cannot be dynamically altered)
- undo_tablespace: default is SYS_UNDOTBS (can be dynamically altered)
- Only 1 tblspace at a time can be used.
- "create undo tablespace x datafile y size 15M"
- "alter system set undo_tablespace=x"
- Transactions have to be complete in order to drop the undo tblspace.
- Parm UNDO_RETENTION (seconds) specifies for how long freed undo segments have to be kept.
- Error snapshot too old -> increase UNDO_RETENTION and eventually increase tblspace size.
- DBA_ROLLBACK_SEGS
Tables
- "segment space management auto" => following are not used: PSTUSED / FREELISTS / FREELIST GROUP
- Oracle recommends: FREELISTS = INITRANS
- In partitioned tables, each partition gets a segment.
- In partitioned tables, each partition can be put in a different tablespace.
- In partitioned tables, each partition can be set as read-only.
- Temporary tables:
- "segment space management auto" => following are not used: PSTUSED / FREELISTS / FREELIST GROUP
- Oracle recommends: FREELISTS = INITRANS
- In partitioned tables, each partition gets a segment.
- In partitioned tables, each partition can be put on a different partition.
- In partitioned tables, each partition can be set as read-only.
- Temporary tables:
- "create global temporary table on commit <delete rows /preserve rows>" -> is created in the user temp space.
- "delete rows" means that it is transaction based.
- "preserver rows" means that it is session based (rows will disappear after a disconnect + connect).
- "drop any table" privilege required to drop other user's tables.
- Set columns as "unused" makes the column invisible in the table definition.
- find migrated rows: pag. 297
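A temporary-table sketch (table and columns are hypothetical):
CREATE GLOBAL TEMPORARY TABLE tmp_results (
  id    NUMBER,
  value VARCHAR2(100)
) ON COMMIT DELETE ROWS;   -- transaction based; use ON COMMIT PRESERVE ROWS for session based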
Indexes
- Bitmap index stores rowids for null values.
- PCTUSED cannot be used.
- INITRANS should be higher than the one of the table.
- Stats have to be gathered in order to use function-based indexes.
- RBO doesn't use function-based indexes. Only CBO.
- "disable validate": Indexes that enforce the contraint is switched off. No DML is allowded.
- Constraint check :
- The method cannot be changed with "alter table". Constraints have to be dropped and recreated.
- "deferrable initially <deferred / immediate>":
- "deferrable": user can change the behaviour (default is not deferrable).
- "initially deferred": do check at the end of the transaction.
- "initially immediate": do check at the end of the sql (default). - Set constraints: for the current transaction.
- "alter table xxx modify constraint initially <deferred / immediate": fixed in the table
User management
- The profile "default is created with the db (has "unlimited" for everything).
- Resource limits are enabled with "resource_limit = true" parameter (default is false).
- Parameters:
(*When limit is reached user can commit/rollback the previous transactions -> Afterwards he has to disconnect. The failed SQL is rolled back)
- The profile "default is created with the db (has "unlimited" for everything).
- Resource limits are enabled with "resource_limit = true" parameter (default is false).
- Parameters:
(*When limit is reached user can commit/rollback the previous transactions -> Afterwards he has to disconnect. The failed SQL is rolled back)
- CPU_PER_SESSION: 1/100 sec.
- CPU_PER_CALL: (for single sql) 1/100 sec
- LOGICAL_READS_PER_SESSION: data blocks from memory & disk.
- LOGICAL_READS_PER_CALL
- PRIVATE_SGA: Bytes (or KB or MB)
- CONNECT_TIME: minutes (total). Afterwards usr is disconnected & rollback.
- IDLE_TIME: minutes (continuous idle). Afterwards usr is disconnected & rollback.
- FAILED_LOGIN_ATTEMPTS: max consecutive failed logins -> account locked.
- PASSWORD_LOCK_TIME: days
- PASSWORD_LIFE_TIME: days -> dba has to reset the password.
- PASSWORD_GRACE_TIME: days before password expiration that a warning is issued.
- PASSWORD_REUSE_TIME: days a pwd cannot be used again after changing it.
- or
- PASSWORD_REUSE_MAX: number of pwd changes before a pwd can be used again.
- PASSWORD_VERIFY_FUNCTION: function to be used for pwd verification.
- Days can be specified as a fraction (e.g. 1/24 means 1 hour).
- "alter user xxx account unlock"
Composite limit
- Value 0 means "not to be considered".
- Following values can be set:
- CPU_PER_SESSION
- LOGICAL_READS_PER_SESSION
- CONNECT_TIME
- PRIVATE_SGA
- To modify the costs: "alter resource cost <which one> <value>". The ones not modified won't change.
- To add composite limit to a profile: "alter profile xxx composite_limit 100000".
- Example: with costs LOGICAL_READS_PER_SESSION 10 and CONNECT_TIME 2, a session with 500'000 reads and 10 minutes of connect time costs (500'000x10)+(10x2) = 5'000'020 (see the sketch below).
- To change other resources in the profile: "alter profile xxx limit <what> <how>"
- Assign profile to users: "alter user yyy profile xxx".
- Views:
- DBA_PROFILES
- USER_RESOURCE_LIMITS
- USER_PASSWORD_LIMITS
- RESOURCE_COST
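A composite-limit sketch reusing the numbers from the example above (the profile name app_users is hypothetical, the same example name as in the earlier profile sketch):
ALTER RESOURCE COST logical_reads_per_session 10 connect_time 2;
ALTER PROFILE app_users LIMIT composite_limit 5000020;
ALTER USER marco PROFILE app_users;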
- Minimal settings:
-> User has to have a space quota (not for temp. tblspaces).
Check if a default temp. space is defined for the db: DATABASE_PROPERTIES
Account lock is default.
At least the "create session" privilege is needed in order to connect.
- ORA_ENCRYPT_LOGIN: if needed, set it to true - but only on the client machine.
- DBLINK_ENCRYPT_LOGIN: if needed, set it to true - but only on the client machine.
- OS_AUTHENT_PREFIX:
Default is OPS$
E.g. OS login is STEF => db must contain the user OPS$STEF => "connect /"
"create user OPS$STEF identified externally.
- REMOTE_OS_AUTHENT (default is false): can connect from any machine, if this is set to true!!!!!
- LICENSE_MAX_SESSIONS: when the limit is reached, connect only with "restricted session" privilege is possible.
- ALL_USERS: user creation date and ID.
- DBA_USERS: encrypted pwd
- To save a password which is already encrypted: "create user xxx identified by values 'jkfsduerum'"
- Shutdown & startup is done by connecting as follows: "connect <username> as sysdba".
Privileges
- PUBLIC: it's a user group. All users belong to it.
- User and a role cannot have the same name.
- DML can be granted on a column.
- Even with the DBA privilege, I have to have "with grant option" to grant to other users.
- Revoke of only "with admin privilege" is not possible. Revoke the whole privilege and reassign it.
- Not possible to revoke single column privileges => revoke table and reassign.
- Granted SYSTEM "with admin option" is NOT revoked from other people when original is revoked.
- Granted OBJECT "with admin option" IS revoked from other people when original is revoked.
Audit
- Audit table is SYS.AUD$
- Enable auditing AUDIT_TRAIL=TRUE (or DB) => inserted into table.
- Enable auditing AUDIT_TRAIL=OS => inserted into os file.
- Types of auditing:
statement
privilege
object (audits as well select statements)
- AUDIT / NOAUDIT.
- Users with admin privileges, when they connect, are audited in the os files.
Globalization support
- Default is US7ASCII (7 bits -> 128 values)
- "alter database character set x":
X:
must be a superset of the current charset, otherwise it's not possible.
UTF8 / WE8ISO8859P1 are supersets of US7ASCII and support all european languages.
- Parm. national character set: used in NCHAR / NVARCHAR / NCLOB
- Parm. national character set can be:
AL16UTF16 (default) (cannot be used as db charset)
UTF8
- AL16UTF16:
2 bytes
UTF-16
- AL32UTF8 / UTF8 / UTFE:
UTF-8
1, 2 or 3 (Asian) bytes.
- NLS_SORT: language used in order to perform sorting.
- NCHAR can use only UTF8 & AL16UTF16.