SQL & PL/SQL :: DBMS_STATS For Table Level Vs Partition Level
Nov 17, 2010
What is the difference between running DBMS_STATS at table level and at partition level, and which provides the better statistics for the optimizer? If table xxxx is partitioned into partitions 1 to 10, will running gather stats on table xxxx at whole-table level or at partition level give the best result for performance?
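As a rough illustration of the difference (owner, table and partition names below are placeholders), the granularity parameter of DBMS_STATS.GATHER_TABLE_STATS controls whether partition-level stats, global table-level stats, or both are collected:

-- Partition-level stats for a single partition only
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'SCOTT',
    tabname     => 'XXXX',
    partname    => 'P1',
    granularity => 'PARTITION',
    cascade     => TRUE);
END;
/

-- Global (table-level) plus partition-level stats in one call
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'SCOTT',
    tabname     => 'XXXX',
    granularity => 'ALL',
    cascade     => TRUE);
END;
/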
I want to write a SQL query which will fetch the data from manual_temp_master and manual_temp_detl, but the Price_bkt_cds values from the manual_temp_detl table should be displayed as columns. The output should look as below:
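The expected layout was not included in the post, but as a hedged sketch of the usual approach (the join key master_id, the value column price, and the bucket codes are all assumptions), the 11g PIVOT clause turns the Price_bkt_cds values into columns:

-- Hypothetical sketch only: pivot detail rows into one column per bucket code
SELECT *
FROM  (SELECT m.master_id,
              d.price_bkt_cds,
              d.price                      -- assumed value column
       FROM   manual_temp_master m
       JOIN   manual_temp_detl   d ON d.master_id = m.master_id)
PIVOT (MAX(price) FOR price_bkt_cds IN ('BKT1' AS bkt1, 'BKT2' AS bkt2, 'BKT3' AS bkt3));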
Can you take an incremental backup level 1 or level 0 without archivelogs?
The syntax would be: backup as compressed backupset cumulative level 1 database.
The reason I ask is that when I run backup as compressed backupset cumulative level 1 database plus archivelog it runs fine, but when I run backup as compressed backupset cumulative level 1 database it just hangs.
Between a statement-level and a row-level trigger, which trigger will execute first? We have BEFORE_UPDATE_ROWLEVEL_TRIGGER and BEFORE_UPDATE_STATEMENTLEVEL_TRIGGER triggers on table product.
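For an ordinary (non-compound) BEFORE UPDATE pair, the statement-level trigger fires once before any rows are processed, and the row-level trigger then fires once per affected row. A quick way to confirm it is a DBMS_OUTPUT line in each trigger (minimal sketch, assuming the product table from the question):

CREATE OR REPLACE TRIGGER before_update_statementlevel_trigger
BEFORE UPDATE ON product
BEGIN
  DBMS_OUTPUT.PUT_LINE('statement-level BEFORE: fires first, once per statement');
END;
/

CREATE OR REPLACE TRIGGER before_update_rowlevel_trigger
BEFORE UPDATE ON product
FOR EACH ROW
BEGIN
  DBMS_OUTPUT.PUT_LINE('row-level BEFORE: fires next, once per affected row');
END;
/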
If we provide an object filter list in dbms_stats.gather_database_stats specifying the partition name to be analysed, then the partition does not get analysed. The version details are as below:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Solaris: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
As already pointed out, version 11.2.0.3 already has a bug in dbms_stats.gather_schema_stats where the object filter list doesn't work. Is this behaviour also a bug?
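For reference, the filter list is a DBMS_STATS.OBJECTTAB collection; a minimal sketch of the kind of call being described (owner, table and partition names are placeholders) looks roughly like this:

DECLARE
  l_filter DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
BEGIN
  l_filter.EXTEND;
  l_filter(1).ownname  := 'SCOTT';      -- placeholder owner
  l_filter(1).objname  := 'SALES';      -- placeholder table
  l_filter(1).partname := 'SALES_P1';   -- the partition expected to be analysed

  DBMS_STATS.GATHER_DATABASE_STATS(
    options         => 'GATHER',
    obj_filter_list => l_filter);
END;
/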
Can a table-level check constraint contain conditional checks (if-else or CASE conditional structures), or checks that are limited by something like a WHERE clause inside the check constraint? And can a table-level check constraint refer to a column in another table that should have the same value?
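For what it's worth, a CASE expression is allowed inside a check constraint, but a check constraint cannot reference another table, and a WHERE-style restriction has to be expressed as a condition on the row itself. A small sketch with invented table and column names:

-- Conditional check: a discount is only allowed for 'INTERNAL' orders
ALTER TABLE orders ADD CONSTRAINT chk_internal_discount
  CHECK (CASE WHEN order_type = 'INTERNAL' THEN discount ELSE 0 END <= 50);

-- Cross-table checks are rejected (subqueries are not allowed in check constraints);
-- that kind of rule needs a foreign key, a trigger, or application logic instead.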
I have a transaction table with some custom properties and two status columns. There are two different applications (.NET and a PL/SQL procedure) using the table. Both processes run in parallel, fetch records one by one, perform some calculation and update the status column.
There is a good chance that both applications will fetch the same record and try to update the same row, which will cause a lock. Can I take a row-level lock before the update in each application? Or is there any other method/process by which this can be handled?
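One common pattern for two parallel consumers is SELECT ... FOR UPDATE SKIP LOCKED, so each application only picks up rows the other has not already locked. A rough sketch (the table name and status values are assumptions based on the description):

DECLARE
  CURSOR c_pending IS
    SELECT t.ROWID AS rid
    FROM   transaction_table t            -- assumed table name
    WHERE  t.status = 'PENDING'
    FOR UPDATE SKIP LOCKED;               -- rows locked by the other application are skipped
BEGIN
  FOR r IN c_pending LOOP
    -- ... perform the calculation here ...
    UPDATE transaction_table
    SET    status = 'PROCESSED'
    WHERE  ROWID = r.rid;
  END LOOP;
  COMMIT;
END;
/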
I want to display the result with the respective levels. For example, p21 and p30 come under the first level; p22, p25, p31 and p32 are 2nd level; p23, p24, p26 and p27 are 3rd level; p28 and p29 are fourth-level item_ids.
I have already tried using the CONNECT BY PRIOR clause, but I still couldn't get the result.
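In case it helps narrow down what went wrong, this is the usual shape of such a query (the items table and parent_item_id column are assumptions):

SELECT item_id,
       LEVEL AS item_level                 -- 1 = p21/p30, 2 = p22/p25/p31/p32, and so on
FROM   items                               -- assumed table name
START WITH parent_item_id IS NULL          -- top-level rows (p21, p30)
CONNECT BY PRIOR item_id = parent_item_id
ORDER SIBLINGS BY item_id;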
I am unable to understand why row-level triggers can't be used on mutating tables.
If you need to update a mutating table, you could bypass these restrictions by using a temporary table, a PL/SQL table, or a package variable. For example, in place of a single AFTER row trigger that updates the original table, resulting in a mutating table error, you might use two triggers--an AFTER row trigger that updates a temporary table, and an AFTER statement trigger that updates the original table with the values from the temporary table.
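A minimal sketch of the package-variable variant of that workaround (table and column names are placeholders): the row trigger only records the keys, and the statement trigger does the real update once the table is no longer mutating.

CREATE OR REPLACE PACKAGE emp_trg_pkg AS
  TYPE t_id_tab IS TABLE OF NUMBER;
  g_ids t_id_tab := t_id_tab();
END emp_trg_pkg;
/

CREATE OR REPLACE TRIGGER emp_air            -- AFTER row: just remember the keys
AFTER INSERT ON emp
FOR EACH ROW
BEGIN
  emp_trg_pkg.g_ids.EXTEND;
  emp_trg_pkg.g_ids(emp_trg_pkg.g_ids.COUNT) := :NEW.empno;
END;
/

CREATE OR REPLACE TRIGGER emp_ais            -- AFTER statement: the table is no longer mutating
AFTER INSERT ON emp
BEGIN
  FOR i IN 1 .. emp_trg_pkg.g_ids.COUNT LOOP
    UPDATE emp SET updated_flag = 'Y' WHERE empno = emp_trg_pkg.g_ids(i);
  END LOOP;
  emp_trg_pkg.g_ids.DELETE;                  -- reset for the next statement
END;
/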
I have a result set with three columns, 'Product Category', 'Product' and 'QtySales', and 10 rows, sorted in the order Product Category, Product. This means a product category will have one or more products under it.
Now I want to add a fourth column to my result set, which should display an incremental number sequence starting from 1, 2, 3... for each row. Also, when the value of the Product Category (1st column) changes, this sequence should restart again from 1.
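ROW_NUMBER() partitioned by the category column does exactly this restart-per-group numbering (column and table names below follow the description and are otherwise assumed):

SELECT product_category,
       product,
       qty_sales,
       ROW_NUMBER() OVER (PARTITION BY product_category
                          ORDER BY product) AS seq_no    -- restarts at 1 for each category
FROM   sales_result;                                      -- assumed name for the result set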
INSERT INTO NODE_LVL VALUES('TBL_APL','TBL_AFL');
INSERT INTO NODE_LVL VALUES('TBL_APP','TBL_ACS');
INSERT INTO NODE_LVL VALUES('TBL_ADD','TBL_ADW');
INSERT INTO NODE_LVL VALUES('TBL_ADP','TBL_ADV');
INSERT INTO NODE_LVL VALUES('TBL_AOP','TBL_AOV');
[code]......
Table 'TBL_APP' has 2 parent nodes, i.e. 'TBL_AOV' and 'TBL_ADV':
SELECT * FROM node_lvl WHERE child_node = 'TBL_APP';
At level 5 there are duplicate nodes, i.e. 'TBL_APP' and 'TBL_ACS' as parent_node and child_node respectively.
SELECT PARENT_NODE, CHILD_NODE, LEVEL FROM NODE_LVL START WITH PARENT_NODE = 'TBL_ACF' CONNECT BY PRIOR CHILD_NODE = PARENT_NODE;
I want to suppress such duplicates, so I added DISTINCT:
SELECT DISTINCT PARENT_NODE, CHILD_NODE, LEVEL FROM NODE_LVL START WITH PARENT_NODE = 'TBL_ACF' CONNECT BY PRIOR CHILD_NODE = PARENT_NODE;
But the requirement is to maintain the same order (of hierarchy) as it was before adding DISTINCT.
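One way I would try (a sketch, not tested against the full data set) is to capture the original hierarchical order with ROWNUM in an inline view, deduplicate with GROUP BY, and sort by the first occurrence:

SELECT parent_node, child_node, MIN(lvl) AS lvl
FROM  (SELECT parent_node, child_node, LEVEL AS lvl, ROWNUM AS hier_order
       FROM   node_lvl
       START WITH parent_node = 'TBL_ACF'
       CONNECT BY PRIOR child_node = parent_node)
GROUP BY parent_node, child_node
ORDER BY MIN(hier_order);                  -- keeps the first-occurrence hierarchy order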
I have created one master-detail block. The master has a form layout with only one record, and the detail block is of tabular type with a record set of 10.
What I need: when the master block entry is made and focus switches to the detail block, the first record of the detail block should come up with some default values. Which property do I need to use?
SQL> SELECT FISCAL_TIME_ID, DATA_ID, M_VALUE,
            SUM(m_value) OVER (PARTITION BY fiscal_time_id, data_id
                               ORDER BY FISCAL_TIME_ID) AS YTD_VALUE
     FROM test11;
I have written a trigger to satisfy the requirement. When I write it as a row-level trigger it comes out with a mutating-table error on the table, which is very obvious; I have changed it to statement level, which is fine, but it doesn't satisfy the requirement.
I want to capture the id column of the rows that got inserted or updated.
Is there any way to get the newly inserted or updated rows, as in the row-level case I can get the value from :new.id?
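On 11g a compound trigger is one way to keep the row-level visibility of :new.id without the mutating-table error: collect the ids in the row section and use them in the after-statement section. A sketch with placeholder names:

CREATE OR REPLACE TRIGGER trg_capture_ids
FOR INSERT OR UPDATE ON my_table           -- placeholder table name
COMPOUND TRIGGER

  TYPE t_id_tab IS TABLE OF my_table.id%TYPE;
  g_ids t_id_tab := t_id_tab();

  AFTER EACH ROW IS
  BEGIN
    g_ids.EXTEND;
    g_ids(g_ids.COUNT) := :NEW.id;         -- :new is available in the row section
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    FOR i IN 1 .. g_ids.COUNT LOOP
      -- the table can now be queried or updated safely (no longer mutating)
      DBMS_OUTPUT.PUT_LINE('Affected id: ' || g_ids(i));
    END LOOP;
  END AFTER STATEMENT;

END trg_capture_ids;
/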
Below is what I am trying to find from the above data.
col1   Unalloc   Alloc   Metric
1345   0.16      1.07    1.3
The result is in Hours.Minutes format.
I am trying to subtract the two dates and find the time taken for a particular section (e.g. unalloc).
I tried using three UNION queries.
Select block, timestamp
From ( Select 'Unalloc' As Block, col1, col4
       From table_1
       Where col2 = 'Batch Started'
         And col3 = 'proc_unalloc'
         And Parms Like '1345'
       Union
       Select 'Unalloc' As Block, col1, col4
       From table_1
       Where col2 = 'Unalloc Ended'
         And col3 = 'proc_unalloc'
         And
[code].....
I will subtract the start and end time for unalloc to get the time taken for unalloc. Then I will subtract the end time of unalloc from the end time of alloc to get the time taken for alloc, and likewise for metric.
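Instead of three UNIONs, one pass with conditional aggregation may be simpler. A sketch against the columns visible in the query above, assuming col4 holds the timestamp (the alloc and metric marker values would follow the same pattern):

SELECT MAX(CASE WHEN col3 = 'proc_unalloc' AND col2 = 'Unalloc Ended'  THEN col4 END)
     - MAX(CASE WHEN col3 = 'proc_unalloc' AND col2 = 'Batch Started'  THEN col4 END) AS unalloc_time
       -- repeat the CASE pair with the alloc / metric marker values for the other sections
FROM   table_1
WHERE  parms LIKE '1345';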
Is it possible to restrict a user at the data level? For example, user 'A' can only query employees of deptno 10; he cannot query employees of other departments.
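A view with a fixed WHERE clause is the simplest option, but if the restriction must apply however the table is accessed, Virtual Private Database (DBMS_RLS) is the usual tool. A sketch with invented schema, table and policy names:

-- Policy function: returns the predicate Oracle appends for the current user
CREATE OR REPLACE FUNCTION emp_dept_policy (p_schema IN VARCHAR2,
                                            p_object IN VARCHAR2)
  RETURN VARCHAR2
IS
BEGIN
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'A' THEN
    RETURN 'deptno = 10';     -- user A only sees department 10
  END IF;
  RETURN NULL;                -- no restriction for other users
END;
/

BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'SCOTT',
    object_name     => 'EMP',
    policy_name     => 'EMP_DEPT10_ONLY',
    function_schema => 'SECADM',            -- schema that owns the policy function
    policy_function => 'EMP_DEPT_POLICY',
    statement_types => 'SELECT');
END;
/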
I have gathered a frequency histogram manually on one of the columns of a table to provide more information to the optimizer for a better cardinality calculation.
Now I have a weekend job that gathers stats at schema level with method_opt as 'FOR ALL COLUMNS SIZE REPEAT'. But I don't want the stats of the above column to be overridden by the stats job. I don't want to lock the statistics of the whole table; I just want to lock the column-level stats for this table.
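As far as I know there is no column-level lock for statistics; the usual workarounds are either to lock the whole table's stats and gather them manually, or to pin a table-level METHOD_OPT preference so the gather on that table keeps the histogram on that one column the way you set it up. A sketch with placeholder owner, table and column names (verify the combined METHOD_OPT form and the preference precedence on your version):

BEGIN
  -- Option 1: per-table METHOD_OPT preference covering the manually histogrammed column
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => 'SCOTT',
    tabname => 'SALES',
    pname   => 'METHOD_OPT',
    pvalue  => 'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS SIZE 254 STATUS_CODE');

  -- Option 2: lock stats for the whole table and gather it yourself
  -- DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'SALES');
END;
/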