Group Policy In VPD
Jun 19, 2013
Give me some examples of creating grouped policies in VPD.
Group policy examples for:
1) A grouped policy on the same table for different columns
2) A grouped policy on two different tables for different columns
I have tried to implement an Oracle RLS policy. I have two schemas, X1 and X1_DBA.
I created the emp table in X1_DBA (create table emp(empid number, ename varchar2(10), deptno number)) and inserted some rows into the table. I then created the function below in the X1_DBA schema and granted SELECT privilege to X1.
CREATE OR REPLACE FUNCTION no_dept10(
p_schema IN VARCHAR2,
p_object IN VARCHAR2)
RETURN VARCHAR2
[code]...
When I add the policy in the X1_DBA schema, I get the error "table or view does not exist":
SQL> BEGIN
2 DBMS_RLS.add_policy
3 (object_schema => 'X1',
4 object_name => 'EMP',
[code]...
ORA-00942: table or view does not exist
ORA-06512: at "SYS.DBMS_RLS", line 20
ORA-06512: at line 2
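For reference, a hedged guess at the fix: the EMP table was created in X1_DBA rather than X1, so the object_schema argument may need to point at the schema that actually owns the table. A minimal sketch, assuming X1_DBA owns both EMP and the predicate function (the policy name is a placeholder):
-- Hedged sketch: object_schema names the schema that owns EMP (X1_DBA, as described above)
BEGIN
  DBMS_RLS.ADD_POLICY (
    object_schema   => 'X1_DBA',
    object_name     => 'EMP',
    policy_name     => 'NO_DEPT10_POLICY',   -- hypothetical policy name
    function_schema => 'X1_DBA',
    policy_function => 'NO_DEPT10',
    statement_types => 'SELECT');
END;
/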
I created the policy as follows.
BEGIN
DBMS_RLS.ADD_POLICY (
object_schema => 'scott',
object_name => 'orders_tab',
policy_name => 'orders_policy',
function_schema => 'nisadmin_vpd',
policy_function => 'get_user_orders',
statement_types => 'select');
END;
/
After creating this policy for the table, when I select from ORDERS_TAB I get the error below.
ORA-00904: "ORA_ROWSCN": invalid identifier
When I drop the policy, I can see the records of ORDERS_TAB again.
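For what it's worth, a minimal, hypothetical sketch of what a predicate function such as get_user_orders usually looks like; the column name and the SYS_CONTEXT lookup are assumptions for illustration, not taken from the original post:
-- Hypothetical sketch of a VPD predicate function; ORDER_OWNER is an assumed column
CREATE OR REPLACE FUNCTION get_user_orders (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2)
  RETURN VARCHAR2
IS
BEGIN
  RETURN 'order_owner = SYS_CONTEXT(''USERENV'', ''SESSION_USER'')';
END;
/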
I'm running Oracle 9i on AIX 5.2. I'm not using a recovery catalog, nor am I using media management software. I perform a full, online rman backup of the database and archived redo logs daily to disk, then use operating system commands to copy the backup to tape. There is only space on disk for two days' backups, so I need to have a retention policy of "redundancy = 1", and run a "delete obsolete" prior to the backup. The problem is that I don't want to subject the archived redo logs to this retention policy.
I have two physical standby databases connected by WAN to the primary site, and I might need archived redo logs that are a few days (or more) old in the event of a prolonged WAN outage. I've read about the "keep forever" option, but apparently it isn't available without using a recovery catalog. Is there any way to spare the archived redo logs from my retention policy?
Note: I want to "protect" the actual archived redo logs from the retention policy, not the backups of the archived redo logs.
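For context, the nightly routine described above would look roughly like the sketch below; this is not the original script, just an illustration of a redundancy-1 policy with DELETE OBSOLETE run before the backup:
# Rough sketch of the daily disk backup routine described above
CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
DELETE NOPROMPT OBSOLETE;
BACKUP DATABASE PLUS ARCHIVELOG;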
How can we use an RMAN backup to restore an Oracle database when the backup has already been pushed to tape and is an old backup (before the retention policy date)? Just to elaborate a bit more: say my retention policy is 3 days and I want to restore from an old backup, say 30 days old, so the backup I need is behind the retention date and has been pushed to tape.
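A hedged sketch of a point-in-time restore from such an older backup; it assumes the backup pieces are still known to (or re-cataloged in) the RMAN repository and are accessible through the media manager, and the channel parameters and target time are placeholders:
RUN {
  # hypothetical SBT channel; parameters depend on the media manager
  ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
  SET UNTIL TIME "TO_DATE('2013-05-20 06:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}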
I'm on a Windows Server 2003 R2 64-bit machine; the database is 11.1.0.7. From RMAN, connected to the target and the catalog, this is a SHOW ALL:
RMAN> show all;
RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'F:oracleadmin
[code]....
I'm running incremental backups: a level 0 on Sunday and a level 1 on the other days of the week. This is the command (ctl) file:
LEVEL 0:
connect target /@ORCL
connect catalog rman/rmanpw@rmancat
RUN
{
[code]....
The level 0 does delete the archive logs (because of DELETE INPUT). Each backupset has two copies of the archive logs (which is what I want), and I expected the logs to be deleted after being backed up for 2 days (each day backs them up twice, so 2 days = 4 times). But it's not deleting the logs, even after 6 runs.
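One hedged possibility: deleting logs only after they have been backed up a given number of times is normally driven by a NOT BACKED UP / BACKED UP clause or by the archivelog deletion policy, not by the backupset retention policy. A sketch, with the "2 times" threshold taken from the goal described above:
# Sketch: tie archivelog deletion to "backed up 2 times" rather than to the retention policy
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE DISK;
BACKUP ARCHIVELOG ALL NOT BACKED UP 2 TIMES;
DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 2 TIMES TO DEVICE TYPE DISK;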
I created the security policy as user sysadmin_vpd as follows:
BEGIN
DBMS_RLS.ADD_POLICY (
object_schema => 'scott',
object_name => 'orders_tab',
policy_name => 'orders_policy',
function_schema => 'sysadmin_vpd',
policy_function => 'get_user_orders',
statement_types => 'select');
END;
After creating this policy I want to view and edit it. Where is it saved by default?
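Not an authoritative answer, but VPD policies are normally visible through the *_POLICIES data dictionary views (the predicate logic itself lives only in the policy function). Something along these lines should show the definition:
-- Sketch: list the policy and the predicate function it points to
SELECT object_owner, object_name, policy_name, pf_owner, function
FROM   dba_policies
WHERE  policy_name = 'ORDERS_POLICY';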
1) I have created a complex view and created an INSTEAD OF trigger on this view
example:
create or replace trigger tr_x instead of update on test_view_name for each row
-------
2) I have created an update policy on this view
DBMS_RLS.ADD_POLICY (
  object_schema   => schema_name,
  object_name     => name of the view,
  policy_name     => policy name,
  function_schema => func schema name,
  policy_function => pkg_test.fn_get_where,
  statement_types => 'UPDATE',
  update_check    => TRUE,
  policy_type     => dbms_rls.dynamic);
3) The policy function pkg_test.fn_get_where always returns 1 = 2, so the update should fail.
4) Now I will issue an update statement on the view test_view_name
update test_view_name set test_col = 1;
but it still updates the records, even though the update policy returns the where clause 1 = 2.
The same where clause in a select policy works perfectly fine.
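For reference, a minimal, hypothetical sketch of the kind of predicate function described in step 3; the real package body is not shown in the original post, so this is an assumption:
-- Hypothetical sketch of a predicate function that blocks every row
CREATE OR REPLACE FUNCTION fn_get_where (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2)
  RETURN VARCHAR2
IS
BEGIN
  RETURN '1 = 2';   -- no row satisfies this predicate
END;
/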
I have a DR setup with the following configuration
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY
I don't want to back up the standby database, but I do want the archived logs to be removed once they are applied, so my flash recovery area does not fill up. Is there some command (RMAN or otherwise) that can fire off this policy?
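A hedged suggestion: with the APPLIED ON STANDBY deletion policy shown above, logs that have been applied are considered deletable, so an RMAN delete that honours the deletion policy (or simply space pressure in the flash recovery area) removes them. A sketch:
# Sketch: remove archived logs that the configured deletion policy already marks as deletable
DELETE NOPROMPT ARCHIVELOG ALL;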
I'm trying to group sets of data based on the time separation between records and then count how many records are in each group.
In the example below, I want to return the count for each group of data, so Group 1 = 5, Group 2 = 5 and Group 3 = 5.
SELECT AREA_ID AS "AREA ID",
LOC_ID AS "LOCATION ID",
TEST_DATE AS "DATE",
TEST_TIME AS "TIME"
FROM MON_TEST_MASTER
WHERE AREA_ID =89
AND LOC_ID ='3015'
AND TEST_DATE ='10/19/1994';
[code]....
Group 1 = 8:00:22 to 8:41:22
Group 2 = 11:35:47 to 11:35:47
Group 3 = 15:13:46 to 15:13:46
Keep in mind the times will always change and sometimes go over the one-hour mark, but no group will have more than a one-hour separation between records.
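A hedged sketch of one way to build such gap-based groups with analytic functions; it assumes TEST_TIME compares as a date/time value (if it is stored as a string it would need converting first):
-- Sketch: start a new group whenever the gap to the previous record exceeds one hour,
-- then count the rows per group (names follow the query above)
SELECT grp, MIN(test_time) AS group_start, MAX(test_time) AS group_end, COUNT(*) AS records
FROM (
  SELECT test_time,
         SUM(new_grp) OVER (ORDER BY test_time) AS grp
  FROM (
    SELECT test_time,
           CASE WHEN test_time - LAG(test_time) OVER (ORDER BY test_time) > 1/24
                THEN 1 ELSE 0 END AS new_grp
    FROM   MON_TEST_MASTER
    WHERE  AREA_ID = 89
    AND    LOC_ID = '3015'
    AND    TEST_DATE = '10/19/1994'
  )
)
GROUP BY grp
ORDER BY grp;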
I read that rownum is applied after the selection is made and before "order by". So, in order to get the sum of salaries for all employees in all departments with a row number starting from 1, I wrote:
select ROWNUM,department_id,sum(salary) from employees group by department_id
If I remove rownum, it gives the correct output. Why can't rownum be used here?
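A hedged explanation: ROWNUM is assigned to rows before the GROUP BY collapses them, so it cannot appear in the select list of a grouped query without being a grouping expression. One common workaround (a sketch) is to aggregate first and number the resulting rows in an outer query:
-- Sketch: aggregate in an inline view, then let ROWNUM number the aggregated rows
SELECT ROWNUM, department_id, dept_salary
FROM (
  SELECT department_id, SUM(salary) AS dept_salary
  FROM   employees
  GROUP BY department_id
);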
Refer to the txt file to create table and insert data.
I executed the following query-
SELECT priority, detail, COUNT(1) FROM TEST GROUP BY priority, detail
and got the following result-
PRIORITY     DETAIL                  COUNT(1)
Standard     Patch                   27
Standard     Initial TSS             1
Standard     Initial development     10
Standard     Production deployment   5
High Prior   Patch                   1
Now I want Initial TSS and Initial development to be combined together as Initial, so that I get the following result:
PRIORITY     DETAIL                  COUNT(1)
Standard     Patch                   27
Standard     Initial                 11
Standard     Production deployment   5
High Prior   Patch                   1
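A hedged sketch of one way to do this: collapse both "Initial ..." values into one bucket with a CASE expression (assuming every detail to be merged starts with the word "Initial"):
-- Sketch: fold "Initial TSS" and "Initial development" into a single "Initial" group
SELECT priority,
       CASE WHEN detail LIKE 'Initial%' THEN 'Initial' ELSE detail END AS detail,
       COUNT(1)
FROM   TEST
GROUP BY priority,
         CASE WHEN detail LIKE 'Initial%' THEN 'Initial' ELSE detail END;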
Is there any way to add one disk group's space to another disk group? One of my disk groups is full and I want to add space to it.
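Not a definitive answer, but ASM space is normally moved between disk groups by dropping a disk from the group that has spare space and adding it to the full one; ASM rebalances automatically. A sketch with hypothetical disk group names and disk paths:
-- Sketch: DATA1, DATA2, the disk name and the path are placeholders, not from the original post
ALTER DISKGROUP data1 DROP DISK data1_0003 REBALANCE POWER 4;
-- once the rebalance completes and the disk is fully dropped:
ALTER DISKGROUP data2 ADD DISK '/dev/rdsk/c3t4d5s6' REBALANCE POWER 4;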
I'm using this code and it performs fine, but I'm wondering if there is a more elegant way to do it, maybe with ROLLUP. Basically (as you can see) I need to get a normal count for each group, but also, for each group, take a percentage of the total count, so all the groups' percentages add up to 100. (Oh yeah, I don't test for zero below; it's just a test.)
select
c.Event,
c.code,
count(1) as calls,
total.total_count,
count(1) / total.total_count * 100 as pct_of_total
from
table1 c
[Code]....
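A hedged alternative sketch: the grand total (and therefore the percentage) can be computed with an analytic function on top of the GROUP BY, so no separate "total" subquery or join is needed; RATIO_TO_REPORT would work the same way:
-- Sketch: an analytic SUM over the grouped counts supplies the grand total on every row
SELECT event,
       code,
       COUNT(1)                               AS calls,
       SUM(COUNT(1)) OVER ()                  AS total_count,
       100 * COUNT(1) / SUM(COUNT(1)) OVER () AS pct_of_total
FROM   table1
GROUP BY event, code;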
CREATE TABLE TEMP_GP_SEQ
(
COL1 NUMBER,
COL2 NUMBER,
COL3 NUMBER,
COL4 NUMBER,
COL5 NUMBER,
COL6 NUMBER,
COL7 CHAR (25)
);
[Code]...
I need to update COL7 of the TEMP_GP_SEQ table, based on the grouping range mentioned above, using the sequence created. But the sequence should not be incremented for each and every record; it should be incremented only when the group changes. For example:
SELECT *
FROM TEMP_GP_SEQ;
COL1 COL2 COL3 COL4 COL5 COL6 COL7
110100
110101
12101001
[Code]...
After update it should look like
SELECT *
FROM TEMP_GP_SEQ;
COL1 COL2 COL3 COL4 COL5 COL6 COL7
110100
110101
12101001M2
[Code]...
Actually the group by fetched two rows, with a total record count of 4 (2 records in each group). So COL7 is updated for these four rows with "M" as a prefix, and the number following "M" comes from the sequence. The number changes (the sequence increments) only when the grouping criteria change, and it stays the same throughout a group.
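A hedged sketch of one way to get a per-group number without calling a sequence per row: derive the group number with DENSE_RANK over the grouping columns (COL1..COL6 defining the groups is an assumption here) and merge it back into COL7:
-- Sketch: DENSE_RANK gives every row of a group the same number and only increments
-- when the group changes; COL1..COL6 as the grouping columns is an assumption
MERGE INTO temp_gp_seq t
USING (
  SELECT ROWID AS rid,
         'M' || DENSE_RANK() OVER (ORDER BY col1, col2, col3, col4, col5, col6) AS new_col7
  FROM   temp_gp_seq
) s
ON (t.ROWID = s.rid)
WHEN MATCHED THEN
  UPDATE SET t.col7 = s.new_col7;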
How do I logically connect two tables? Here is an example of what I'm trying to do:
HOSTS TABLE          CPU TABLE
-----------          ---------
ID  HOST             ID  CPU
01  host1            01  proc01
02  host2            01  proc02
03  host3            02  proc01
                     02  proc02
                     02  proc03
                     02  proc04
                     03  proc01
Based on the above, I can see that 'host1' has 2 CPUs, 'host2' has 4 CPUs, and 'host3' has 1 CPU. What I'd like to do is create a query that would output:
HOST    CPU
-----   ---
host1   2
host2   4
host3   1
I'm looping through the hosts and passing them into another query as bind variables. That's slow and cannot be the best way to do this. I'm aware that I need to use a GROUP BY with HAVING COUNT, but that doesn't seem to be working; it tends to return the total count of all CPUs rather than CPUs per host.
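A hedged sketch, assuming the two tables are called HOSTS and CPU as labelled above and join on ID; a plain GROUP BY on the host gives a per-host count, and HAVING is only needed if the groups themselves must be filtered:
-- Sketch: join on ID, then group by host so the count is per host rather than overall
SELECT h.host, COUNT(c.cpu) AS cpu_count
FROM   hosts h
JOIN   cpu   c ON c.id = h.id
GROUP BY h.host
ORDER BY h.host;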
I have a query that extracts the row with a max(record_date) within a group.
select ssn, fund, type_indicator, annuitant, cur_year, record_date, guarantee
from LC_MORTALITY
where (ssn, fund, type_indicator, annuitant, cur_year, record_date) in
(select ssn, fund, type_indicator, annuitant, cur_year, max(record_date)
from LC_MORTALITY
group by ssn, fund, type_indicator, annuitant, cur_year);
The table has an index that matches the GROUP BY clause exactly:
create index IDX_LC_MORTALITY_sftayd on LC_MORTALITY
(SSN,
FUND,
TYPE_INDICATOR,
ANNUITANT,
cur_year,
record_date);
However, the plan ignores the index
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1963K| 241M| | 1375K (4)| 05:21:04 |
|* 1 | HASH JOIN RIGHT SEMI| | 1963K| 241M| 9701M| 1375K (4)| 05:21:04 |
| 2 | VIEW | VW_NSO_1 | 145M| 8038M| | 241K (12)| 00:56:22 |
| 3 | HASH GROUP BY | | 145M| 8038M| | 241K (12)| 00:56:22 |
| 4 | TABLE ACCESS FULL| LC_MORTALITY | 145M| 8038M| | 219K (3)| 00:51:13 |
| 5 | TABLE ACCESS FULL | LC_MORTALITY | 145M| 9840M| | 222K (5)| 00:51:49 |
---------------------------------------------------------------------------------------------
[code]...
It uses the index but still shows "TABLE ACCESS FULL" in one place. Why is there still a full access? Can I do anything to optimize further?
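A hedged alternative worth testing: the IN-subquery form visits the table twice, while ranking the rows once with an analytic function avoids the self-join entirely (whether it is actually faster on this data would need to be verified):
-- Sketch: keep only the latest record_date per group in a single pass
SELECT ssn, fund, type_indicator, annuitant, cur_year, record_date, guarantee
FROM (
  SELECT m.*,
         ROW_NUMBER() OVER (PARTITION BY ssn, fund, type_indicator, annuitant, cur_year
                            ORDER BY record_date DESC) AS rn
  FROM   LC_MORTALITY m
)
WHERE rn = 1;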
My requirement is that data from TABLEA has to be provided as an overall view.
TABLEA
ID ENTITY REQ_FLG PAR_FLG EXT_FLG
CONV_1 ACCNT Y Y Y
CONV_1 PROD Y Y N
CONV_1 ADDR Y N N
CONV_2 DID Y N N
CONV_2 ORDER Y N N
The data is required to be shown in the report as:
ID Expand View_Report Populate ENTITY QRY_STATUS
CONV_1 Expand Report Populate
ACCNT Y Y Y
PROD Y Y Y
ADDR Y N N
CONV_2 Expand Report Populate
DID Y N N
ORDER Y N N
Where "Expand", "Report", "Populate" are provided as Hard coded values in query.
Sample Query.
SELECT ID
,'Expand' AS EXPAND
,'Report' AS VIEW_REPORT
, 'Populate / Reset' AS POP
, DECODE(MN_TBL.ENTITY,NULL,NULL,ENTITY) AS ENTITY
, REQ_FLG || ' ' || PAR_FLG || ' ' || EXT_FLG AS QRY_STATUS
FROM TABLEA
GROUP BY GROUPING SETS
((ID), (ENTITY, REQ_FLG , PAR_FLG , EXT_FLG ))
ORDER BY CONVERSION_ID, ENTITY
The above query works fine where a single ID is present:
ID Expand View_Report Populate ENTITY QRY_STATUS
CONV_1 Expand Report Populate
ACCNT Y Y Y
PROD Y Y Y
ADDR Y N N
But when more than one ID is present, the entire thing collapses.
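A hedged sketch of one possible fix: include ID in both grouping sets so that each ID's detail rows stay attached to their own header row instead of all headers and all details collapsing together:
-- Sketch: keep ID in the detail grouping set and order headers before their details
SELECT id,
       'Expand'           AS expand,
       'Report'           AS view_report,
       'Populate / Reset' AS pop,
       entity,
       req_flg || ' ' || par_flg || ' ' || ext_flg AS qry_status
FROM   TABLEA
GROUP BY GROUPING SETS ((id), (id, entity, req_flg, par_flg, ext_flg))
ORDER BY id, entity NULLS FIRST;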
My weekly reports should start on Thursday and end on Wednesday. I need to aggregate data from Thursday to Wednesday. At the end of every month I generate a report for the last 4 weeks.
I couldn't find a way to group by correctly. When I group by trunc(fragment_date,'day')+2 and check the data, it aggregates from Monday to Sunday.
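A hedged sketch of a Thursday-based week bucket that does not depend on NLS settings: shift the date back three days, truncate to the ISO week (which always starts on Monday), then shift forward again; the table and measure names are placeholders:
-- Sketch: TRUNC(d - 3, 'IW') + 3 maps every date to the Thursday that starts its week
SELECT TRUNC(fragment_date - 3, 'IW') + 3 AS week_starting_thu,
       COUNT(*)                           AS row_count
FROM   weekly_data                        -- hypothetical table name
GROUP BY TRUNC(fragment_date - 3, 'IW') + 3
ORDER BY week_starting_thu;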
SELECT COUNT(c.country_id)
FROM countries c,
employees e,
locations l,
departments d
[Code]....
I really don't know when and where to use the GROUP BY clause in the query above. When I add c.country_name before the word FROM, I always get this error: "ORA-00979: not a GROUP BY expression".
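A hedged sketch against the standard HR sample schema (the join columns are assumed from that schema): every column in the SELECT list that is not inside an aggregate must also appear in the GROUP BY clause, which is what ORA-00979 is complaining about:
-- Sketch: count employees per country; country_name is in the GROUP BY because it is not aggregated
SELECT c.country_name, COUNT(c.country_id) AS employee_count
FROM   countries   c
JOIN   locations   l ON l.country_id    = c.country_id
JOIN   departments d ON d.location_id   = l.location_id
JOIN   employees   e ON e.department_id = d.department_id
GROUP BY c.country_name;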
This time, I am going to provide the DML statements.
I have a simple table with 3 fields in it. I want to group by ACCT_NUMBER and sum the BALANCE field, but I also want to get the DESCRIPTION of the first row in each group.
See the statements below. There are two groups of records, 2001 and 2002. My SQL (which I am working on) should return the following:
2001 EMPL TRAINING-MIS 20
2002 OTHER PROF SERV-HR 40
The following query will group by ACCT_NUMBER and sum the BALANCE field. But how can I get the DESCRIPTION?
SELECT ACCT_NUMBER, SUM(BALANCE)
FROM TEST
GROUP BY ACCT_NUMBER
CREATE TABLE "TEST"
("ACCT_NUMBER" VARCHAR2(20 BYTE),
"DESCRIPTION" VARCHAR2(20 BYTE),
[code]...
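A hedged sketch using KEEP (DENSE_RANK FIRST ...); "first row" needs a defined order, which the post does not specify, so ordering by DESCRIPTION itself is an assumption here:
-- Sketch: one DESCRIPTION per ACCT_NUMBER alongside the summed balance
SELECT acct_number,
       MIN(description) KEEP (DENSE_RANK FIRST ORDER BY description) AS description,
       SUM(balance) AS balance
FROM   TEST
GROUP BY acct_number;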
I am having trouble selecting the row that has max(pay_period) of 13. Here is some test data.
CREATE TABLE TESTME
(
SSN VARCHAR2(11 BYTE),
PAY_PERIOD VARCHAR2(3 BYTE),
PAY_YEAR NUMBER,
KRONOS_ID VARCHAR2(6 BYTE),
LAST_NAME VARCHAR2(15 BYTE),
FIRST_NAME VARCHAR2(14 BYTE),
ADJ_SALARY NUMBER
)
[code]....
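A hedged sketch: rank the rows per employee by pay period and keep the top one; PAY_PERIOD is a VARCHAR2 here, so it is converted to a number for ordering, and partitioning by SSN and PAY_YEAR is an assumption:
-- Sketch: the latest pay period per employee and year
SELECT ssn, pay_period, pay_year, kronos_id, last_name, first_name, adj_salary
FROM (
  SELECT t.*,
         ROW_NUMBER() OVER (PARTITION BY ssn, pay_year
                            ORDER BY TO_NUMBER(pay_period) DESC) AS rn
  FROM   TESTME t
)
WHERE rn = 1;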
I need help using GROUP BY to get counts. I have a table with the columns below:
SYS_AUDIT_ID   SYS_AUDIT_PROG_ID   PROG_FINDING_ID_COUNT
1789           2165                2
1789           2164                1
1789           2163                1
1793           2152                1
1793           2151                1
1793           2150                1
1793           2149                1
1793           2146                1
For each audit I want the number of SYS_AUDIT_PROG_ID values and the total of PROG_FINDING_ID_COUNT.
I want to get:
1789 3 4
I tried this query, but it is not working:
select sys_audit_id, count(sys_audit_prog_id), count(prog_finding_id_count)
from my_table sub
group by sys_audit_id
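A hedged sketch: the second figure in the expected output (4 for audit 1789) is the total of the finding counts, so that column wants SUM rather than COUNT:
-- Sketch: COUNT the program IDs, SUM the per-program finding counts
SELECT sys_audit_id,
       COUNT(sys_audit_prog_id)   AS prog_count,
       SUM(prog_finding_id_count) AS finding_total
FROM   my_table
GROUP BY sys_audit_id;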
I have two tables, OS_CURRENTSTEP and OS_WFENTRY.
CREATE TABLE OS_CURRENTSTEP
(
ID NUMBER,
ENTRY_ID NUMBER NOT NULL,
STEP_ID INTEGER NOT NULL,
ACTION_ID INTEGER,
OWNER VARCHAR2(20 BYTE),
START_DATE DATE,
[Code]...
I need the count of step_id from os_currentstep.
Here is the query:
select count(step_id), step_id from os_currentstep where owner='Marty' group by step_id
This gives me the count, but I also need the name associated with each step_id, which is in the OS_WFENTRY table; I cannot get the name from step_id alone.
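A hedged sketch, assuming OS_CURRENTSTEP.ENTRY_ID references OS_WFENTRY.ID and that OS_WFENTRY carries a NAME column (that is the usual OSWorkflow layout, but it is an assumption here):
-- Sketch: join the workflow entry so its name sits alongside the per-step counts
SELECT w.name, c.step_id, COUNT(c.step_id) AS step_count
FROM   os_currentstep c
JOIN   os_wfentry     w ON w.id = c.entry_id
WHERE  c.owner = 'Marty'
GROUP BY w.name, c.step_id;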
How can I count the number of employees working under each manager, and display the manager name instead of the manager number?
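A hedged sketch against the classic SCOTT.EMP table; the post does not show its own table, so EMP, EMPNO, MGR and ENAME are assumptions:
-- Sketch: self-join EMP to its manager row, then count per manager name
SELECT m.ename AS manager, COUNT(e.empno) AS emp_count
FROM   emp e
JOIN   emp m ON m.empno = e.mgr
GROUP BY m.ename;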
I am new to PL/SQL. Could you please let me know how to solve this requirement using PL/SQL?
C-A | C-B
A123 | -1
B334 | 4
B567 | 2
B333 | -1
T777 | 2
Y774 | 3
T879 | 4
T654 | 3
T474 | 0
Y432 | -1
I need the output like this
C-1|C-2|C-3|C-4
3 |7 |3/10|7/10
Column-1 (Count of rows where Column-B = -1)
Column-2 (Count of rows where Column-B in (0,1,2,3,4))
Column-3 (Column-1/Total rows)
Column-4 (Column-2/Total rows)
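A hedged sketch using conditional counts; the table name T and the column names C_A/C_B are placeholders standing in for the "C-A"/"C-B" headings above:
-- Sketch: conditional counts produce all four figures in a single pass
SELECT COUNT(CASE WHEN c_b = -1 THEN 1 END)                            AS c_1,
       COUNT(CASE WHEN c_b IN (0, 1, 2, 3, 4) THEN 1 END)              AS c_2,
       COUNT(CASE WHEN c_b = -1 THEN 1 END) / COUNT(*)                 AS c_3,
       COUNT(CASE WHEN c_b IN (0, 1, 2, 3, 4) THEN 1 END) / COUNT(*)   AS c_4
FROM   t;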
Here is the scenario with examples: a big table, 333 to 500 million rows. Statistics are gathered and histograms are there, but the index is not being used. Why?
CREATE TABLE "XXFOCUS"."some_huge_data_table"
( "ORG_ID" NUMBER NOT NULL ENABLE,
"PARTNERID" VARCHAR2(30) NOT NULL ENABLE,
"EDI_END_DATE" DATE NOT NULL ENABLE,
"CUSTOMER_ITEM_NUMBER" VARCHAR2(50) NOT NULL ENABLE,
"STORE_NUMBER" VARCHAR2(10) NOT NULL ENABLE,
"EDI_START_DATE" DATE,
[Code]...
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> SELECT num_rows FROM user_tables s WHERE s.table_name = 'some_huge_data_table';
NUM_ROWS
----------
333338434
SQL> SELECT MAX(edi_end_date)
2 FROM some_huge_data_table p
3 WHERE p.org_id = some_number
4 AND p.partnerid = 'some_string';
MAX(EDI_E
---------
13-MAY-12
Elapsed: 00:00:00.00
SQL> explain plan for
2 SELECT MAX(edi_end_date)
3 FROM some_huge_data_table p
4 WHERE p.org_id = some_number
5 AND p.partnerid = 'some_string';
Explained.
SQL> /
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 2104157595
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 22 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 22 | | |
| 2 | FIRST ROW | | 1 | 22 | 4 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN (MIN/MAX)| some_huge_data_table_PK | 1 | 22 | 4 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
SQL> explain plan for
2 SELECT MAX(edi_end_date),
3 org_id,
4 partnerid
5 FROM some_huge_data_table
6 GROUP BY org_id,
7 partnerid;
Explained.
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3950336305
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 44 | 1605K (1)| 05:21:03 |
| 1 | HASH GROUP BY | | 2 | 44 | 1605K (1)| 05:21:03 |
| 2 | TABLE ACCESS FULL| some_huge_data_table | 333M| 6993M| 1592K (1)| 05:18:33 |
-------------------------------------------------------------------------------
Why wouldn't it use the index in the GROUP BY? If I write a loop to query for the different partnerids (there are only three), the whole thing takes less than a second.
By the way, I gave the index hint too; it didn't work. The version is mentioned in the example.
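For reference, a hedged sketch of the "loop over the partners" approach described above; each single-partner probe can then use the MIN/MAX index range scan from the first plan. The driving DISTINCT is only illustrative and assumes the partner list is small and cheap to obtain:
-- Sketch: probe the index once per (org_id, partnerid) pair instead of one big GROUP BY
DECLARE
  v_max DATE;
BEGIN
  FOR r IN (SELECT DISTINCT org_id, partnerid FROM some_huge_data_table) LOOP
    SELECT MAX(edi_end_date)
      INTO v_max
      FROM some_huge_data_table
     WHERE org_id    = r.org_id
       AND partnerid = r.partnerid;
    DBMS_OUTPUT.PUT_LINE(r.org_id || ' ' || r.partnerid || ' ' || v_max);
  END LOOP;
END;
/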
below is the create and insert script.
create table sal_group (g_id number(10),sal_rand number(20));
/
INSERT INTO sal_group values(1,5000)
/
INSERT INTO sal_group values(1,2000)
/
INSERT INTO sal_group values(1,4000)
[code]...
/
I have tried:
SELECT g_id,sal_rand,
case when row_number() over (order by g_id) = count(*) over () then 'yes' else null end as row_check
FROM sal_group

Output:
G_ID SAL_RAND ROW_CHECK
1 4000
1 2000
[code]....
select T2.name, sum(T1.area)
from (select * from country) T1, T2

I wrote this very simply just to show you the problem, but I think here we would have to add a GROUP BY. Is there any way to avoid the GROUP BY? Because I don't want to group anything!
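A hedged sketch: an analytic SUM returns the total alongside every row without collapsing them, so no GROUP BY is needed; the join condition and the real table definitions are not shown in the original, so this is only illustrative:
-- Sketch: analytic SUM keeps every row and still exposes the per-name total
SELECT t2.name,
       SUM(t1.area) OVER (PARTITION BY t2.name) AS total_area
FROM   (SELECT * FROM country) t1
JOIN   t2 ON t2.id = t1.t2_id;   -- hypothetical join condition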
oracle11g
I have the below requirement:
I've got a table (deptid, deptname, address, city, zip, state, other columns) and I want to write a query to detect errors (records with different values), because I expect all records grouped by (deptid, deptname, address, city, zip, state) to agree on their attributes (please note that deptid isn't unique);
in other words, a specific deptid should have only one record based on the grouping (deptid, deptname, address, city, zip, state).
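A hedged sketch (the table name DEPT_TABLE is a placeholder): list the deptids that map to more than one distinct (deptname, address, city, zip, state) combination, i.e. the suspect records:
-- Sketch: deptids that appear with more than one distinct attribute combination
SELECT deptid
FROM  (SELECT DISTINCT deptid, deptname, address, city, zip, state
       FROM   dept_table)
GROUP BY deptid
HAVING COUNT(*) > 1;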