11g R2 - ADDM Recommendation In Any Table?
Jun 22, 2012
On 11g R2, are ADDM recommendations stored in any table? If yes, which one?
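In case it is useful, ADDM findings and recommendations are exposed through the DBA_ADVISOR_* dictionary views (DBA_ADVISOR_TASKS, DBA_ADVISOR_FINDINGS, DBA_ADVISOR_RECOMMENDATIONS, DBA_ADVISOR_ACTIONS). A minimal sketch; the LIKE filter on the task name is only an assumption about the auto-generated ADDM task names:

SELECT f.task_name, f.finding_id, f.type, f.message
  FROM dba_advisor_findings f
 WHERE f.task_name LIKE 'ADDM%'
 ORDER BY f.task_name, f.finding_id;

SELECT r.task_name, r.rec_id, r.rank, r.benefit
  FROM dba_advisor_recommendations r
 WHERE r.task_name LIKE 'ADDM%'
 ORDER BY r.task_name, r.rank;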
View 5 Replies
I am trying to generate Oracle 10g ADDM and ASH reports using a SQL query in HTML format, just like the AWR report example below.
e.g. :
SELECT output
  FROM TABLE(dbms_workload_repository.awr_report_html(37933856, 1, 2900, 2911));
We are not using Enterprise Manager. Now I want to generate an ADDM report for a particular SQL statement. How can I do that?
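For what it's worth, an ADDM run can be created and reported entirely from SQL*Plus with DBMS_ADVISOR; note that ADDM analyses the whole instance for a snapshot range rather than a single statement, so for one SQL statement the AWR SQL report (dbms_workload_repository.awr_sql_report_html, which takes a sql_id) may be closer to what is wanted. A minimal sketch, reusing the snapshot range from the AWR example above:

DECLARE
  tname VARCHAR2(60) := 'ADDM_2900_2911';
BEGIN
  dbms_advisor.create_task(advisor_name => 'ADDM',
                           task_name    => tname,
                           task_desc    => 'ADDM for snapshots 2900-2911');
  dbms_advisor.set_task_parameter(tname, 'START_SNAPSHOT', 2900);
  dbms_advisor.set_task_parameter(tname, 'END_SNAPSHOT',   2911);
  dbms_advisor.execute_task(tname);
END;
/

SELECT dbms_advisor.get_task_report('ADDM_2900_2911', 'TEXT', 'ALL') FROM dual;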
View 2 Replies View Related
I have been trying to develop a script to generate ADDM reports every hour and save them in a directory on the server. I was able to develop a script that runs the AWR reports every hour and saves them in a directory, but I ran into troubled waters with the ADDM script.
Database Version : Oracle Database Enterprise Edition 10.2.0.3
OS : IBM AIX 5.3
I'm trying to debug the script below, which should generate ADDM reports on a per-hour basis, save them in a folder, and mail them to a particular recipient.
########################################################################################
# Set up Oracle environment variables...
#------------------------------------------------------------------------------
export PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/oracle/app/oracle/product/10.2.0/bin:/usr/bin/X11:/sbin:.:/oracle/app/oracle/product/10.2.0/bin
export ORACLE_HOME=/oracle/app/oracle/product/10.2.0
export ORACLE_SID=sarcotest2.ora
export ORAENV_ASK=NO
[Code].....
Initially, the script did return an ADDM report, but the problem was that it generated the report for only one pair of snapshots (e.g. 410110 and 410111). Later, when the snapshots advanced further, it still generated the same report for the same snapshots.
Now I have made several changes to the script in the DECLARE and DBMS_ADVISOR section, and the reports are not being generated at all.
One problem I am facing is:
ORA-13605: The specified task or object ADDM_UAT does not exist for the current user.
ORA-06512: at "SYS.PRVT_ADVISOR", line 2043
ORA-06512: at "SYS.DBMS_ADVISOR", line 560
ORA-06512: at line 1
It seems the task is not being created at all in my user's schema.
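One quick check worth doing (a sketch using the standard advisor dictionary views): confirm whether the task actually exists and in which schema, since ORA-13605 is raised when EXECUTE_TASK or GET_TASK_REPORT is called for a task name that was never created in, or is owned by a different, schema:

SELECT owner, task_name, advisor_name, status, created
  FROM dba_advisor_tasks
 WHERE task_name = 'ADDM_UAT';

If the row is missing, the DBMS_ADVISOR.CREATE_TASK call in the script is failing (or never running) before EXECUTE_TASK; if it exists under another owner, GET_TASK_REPORT also accepts an owner_name argument.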
In my production environment, I have a mostly working Oracle 12c Cloud Control environment managing several database instances. On all of the databases, I am unable to use the Compare Period ADDM feature (Instance > Performance > AWR > Compare Period ADDM).
When I select that menu option, I see the message "To be able to use this feature some PL/SQL packages need to be loaded into the target database's monitoring schema, DBSNMP." I have been searching for further information on the specific packages that need to be loaded per the message, but neither Orafaq, [URL] nor Google seems to have those details.
As there was a database performance issue, we started to analyse it by running ADDM as below:
@?/rdbms/admin/addmrpt.sql
The result of running ADDM is below.
DETAILED ADDM REPORT FOR TASK 'TASK_64718' WITH ID 64718
Analysis Period: 10-APR-2012 from 06:00:15 to 07:00:22
Database ID/Instance: 324353546/1
Database/Instance Names: JACK/jack1
Host Name: fdsfggwe001
[code].....
There was no significant database activity to run the ADDM. The database's maintenance windows were active during 100% of the analysis period. The analysis of I/O performance is based on the default assumption that the average read time for one database block is 10000 micro-seconds. An explanation of the terminology used in this report is available when you run the report with the 'ALL' level of detail.
The current parameters set in the database are below:
statistics_level ---- string
timed_statistics --- boolean
1. Do I need to set the initialization parameter to some other value?
2. Should I run the command below before executing the ADDM report? exec dbms_workload_repository.create_snapshot();
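A minimal sketch of the usual checks for a "no significant activity" ADDM run: AWR/ADDM need statistics_level at TYPICAL or ALL (BASIC disables the snapshots), and the analysis period must be bracketed by snapshots taken while the real workload is running (the assumption below is that the workload runs between the two manual snapshots):

SELECT name, value FROM v$parameter
 WHERE name IN ('statistics_level', 'timed_statistics');

ALTER SYSTEM SET statistics_level = TYPICAL;

EXEC dbms_workload_repository.create_snapshot();
-- ... let the workload you want analysed run here ...
EXEC dbms_workload_repository.create_snapshot();

@?/rdbms/admin/addmrpt.sql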
I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.
So far the only option I have is to export and then import back the data.
Is there any way to move a partitioned table between tablespaces of different block sizes?
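For reference, all partitions of a partitioned table must live in tablespaces of the same block size, so a straight move only works when the block sizes match; otherwise recreating the table (export/import or CTAS) in the new tablespace is the usual route. A hedged sketch with hypothetical table, partition and tablespace names:

-- works only when NEW_TS has the same block size as the current tablespace:
ALTER TABLE sales MOVE PARTITION p_2012_q1 TABLESPACE new_ts;
ALTER INDEX sales_idx REBUILD PARTITION p_2012_q1 TABLESPACE new_idx_ts;

-- otherwise, Data Pump can recreate the whole table in the new tablespace:
-- expdp scott/pwd tables=SALES directory=DATA_PUMP_DIR dumpfile=sales.dmp
-- impdp scott/pwd tables=SALES directory=DATA_PUMP_DIR dumpfile=sales.dmp
--       remap_tablespace=OLD_TS:NEW_TS table_exists_action=replace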
I have a requirement to import text files generated by the 3D modelling software Xsteel, which records all geometric information, and I want to import this information into an Oracle table.
CREATE TABLE dstv_head ( wo_no VARCHAR2(12),struct VARCHAR2(12),rev_no NUMBER,
mark VARCHAR2(12),pos VARCHAR2(12),grade VARCHAR2(12),qty NUMBER,PROFILE VARCHAR2(24),TYPE VARCHAR2(12),
len NUMBER,width_web NUMBER,width_bottom NUMBER,flange_thk NUMBER,web_thk NUMBER,radius NUMBER,kgm NUMBER,
kgm1 NUMBER,kgm2 NUMBER,bevel_plus NUMBER,bevel_minus NUMBER,holes_yn VARCHAR2(1),holes_v_yn VARCHAR2(1),
hole_x_dim NUMBER,hole_y_dim NUMBER,hole_dia NUMBER,no_of_holes NUMBER)
-- All the data has to go under a specific field; for example, **9005.nc1 will go into the wo_no field and 1239401A will go under struct.
ST
** 9005.nc1   -- WO_NO
1239401A      -- STRUCT
1             -- REV_NO
9005          -- MARK
9005          -- POS
S275JR        -- GRADE
2             -- QTY
[code]....
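Since the file holds one value per line in a fixed order, one possible approach is a PL/SQL loader that reads the file with UTL_FILE and maps each line to the next column. This is only a sketch under assumptions: DSTV_DIR is a hypothetical directory object pointing at the folder with the .nc1 files, and only the first few fields are shown.

CREATE OR REPLACE PROCEDURE load_dstv_head (p_file IN VARCHAR2) IS
  f     UTL_FILE.FILE_TYPE;
  l     VARCHAR2(4000);
  v_rec dstv_head%ROWTYPE;

  -- read the next line of the file, trimmed
  FUNCTION next_line RETURN VARCHAR2 IS
    s VARCHAR2(4000);
  BEGIN
    UTL_FILE.GET_LINE(f, s);
    RETURN TRIM(s);
  END;
BEGIN
  f := UTL_FILE.FOPEN('DSTV_DIR', p_file, 'R');
  l := next_line;                                   -- 'ST' block marker (discarded)
  v_rec.wo_no  := TRIM(REPLACE(next_line, '**'));   -- e.g. '** 9005.nc1'
  v_rec.struct := next_line;
  v_rec.rev_no := TO_NUMBER(next_line);
  v_rec.mark   := next_line;
  v_rec.pos    := next_line;
  v_rec.grade  := next_line;
  v_rec.qty    := TO_NUMBER(next_line);
  -- ... remaining fields follow the same pattern ...
  INSERT INTO dstv_head (wo_no, struct, rev_no, mark, pos, grade, qty)
  VALUES (v_rec.wo_no, v_rec.struct, v_rec.rev_no, v_rec.mark,
          v_rec.pos, v_rec.grade, v_rec.qty);
  UTL_FILE.FCLOSE(f);
END;
/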
A primary key constraint violation on transaction_dtl_bk is preventing the insertion of the subsequent correct rows.
CREATE OR REPLACE PROCEDURE NP_DB.san_po_nt_wnpg_1 (
dt DATE
)
IS
v_sql_error VARCHAR2 (100); -- added by sanjiv
v_sqlcode VARCHAR2 (100); -- added by sanjiv
[code]...
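If the goal is to keep loading the remaining good rows when a duplicate key shows up, DML error logging (available since 10gR2) is one option: the offending rows are diverted to an error table instead of failing the whole INSERT. A sketch under assumptions; the source query and tag are hypothetical:

-- one-time setup: creates ERR$_TRANSACTION_DTL_BK by default
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TRANSACTION_DTL_BK');
END;
/

-- inside the procedure: duplicates go to the error table, the rest are inserted
INSERT INTO transaction_dtl_bk
SELECT *
  FROM transaction_dtl                -- hypothetical source of the rows being copied
  LOG ERRORS INTO err$_transaction_dtl_bk ('san_po_nt_wnpg_1')
  REJECT LIMIT UNLIMITED;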
Oracle 10g, Windows XP
There is an interface table and a normal transactional table. The interface table is being compared with the normal table, and if a match is found the result is dumped into another normal table.
I am using two cursors: one queries the interface table, and inside a FOR loop the results are passed to the second cursor. The interface table has 5000+ rows and the transaction table has more than 3.7 million, and the program takes a long time to execute - almost 35-45 minutes.
create table x_interface /* Interface table */ -- 5000+ rows
( name varchar2(80), addr_line1 varchar2(35), addr_line2 varchar2(35), addr_line3 varchar2(35),
addr_line4 varchar2(35), addr_line5 varchar2(35), addr_line6 varchar2(35), suffix varchar2(35),
city varchar2(15), state varchar2(10), zcode varchar2(10))
[code]....
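Row-by-row nested cursors over 5000 x 3.7 million rows is usually the bottleneck here; a single set-based statement that lets Oracle hash-join the two tables normally reduces this to seconds. A sketch only - the target table, the join columns and the matching rule are assumptions that need adjusting to the real matching logic:

INSERT INTO x_matched (name, addr_line1, city, state, zcode)   -- hypothetical result table
SELECT i.name, i.addr_line1, i.city, i.state, i.zcode
  FROM x_interface   i
  JOIN x_transaction t                                          -- hypothetical 3.7M-row table
    ON t.name  = i.name
   AND t.zcode = i.zcode;
COMMIT;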
How do I create a SQL script that can update info from one table in dbase1 to another table in dbase2 that has the same columns and, if possible, insert the date and time into one column when the synchronization is done?
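One way to sketch this is a MERGE over a database link, stamping a sync column on every row it touches. All names below (the link, tables, key and columns) are assumptions:

MERGE INTO target_tab@dbase2_link t         -- table in dbase2, reached via a db link
USING source_tab s                          -- table in dbase1
   ON (t.id = s.id)
 WHEN MATCHED THEN
   UPDATE SET t.col1      = s.col1,
              t.col2      = s.col2,
              t.sync_date = SYSDATE
 WHEN NOT MATCHED THEN
   INSERT (id, col1, col2, sync_date)
   VALUES (s.id, s.col1, s.col2, SYSDATE);
COMMIT;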
View 3 Replies View Related
I am trying to execute dynamic SQL in a stored function and I don't know how to do this.
Explanation:
In the function, I call pr_createtab, a procedure which creates a physical table and returns the table name in the OUT variable v_tbl_nm.
I need to query this dynamic table and return the result as the function's return value, but I am not able to do it.
Here T_web_loylty_report_table is a type.
CREATE OR REPLACE function CDW_DSS.f_ReturnTable(i_mrkt_id in number, i_cmpgn_year in number)
return T_web_loylty_report_table is
v_tbl_nm varchar2(50);
i_cntry_cd varchar2(20);
v_sql_str varchar2(32567);
[code]......
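One pattern that fits here is EXECUTE IMMEDIATE ... BULK COLLECT INTO the collection type the function returns. The sketch below assumes t_web_loylty_report_table is a collection of an object type (called t_web_loylty_report_row here, which is an assumption) and that pr_createtab takes these parameters; adjust both to the real definitions:

CREATE OR REPLACE FUNCTION cdw_dss.f_returntable (i_mrkt_id IN NUMBER, i_cmpgn_year IN NUMBER)
  RETURN t_web_loylty_report_table
IS
  v_tbl_nm VARCHAR2(50);
  v_result t_web_loylty_report_table;
BEGIN
  pr_createtab(i_mrkt_id, i_cmpgn_year, v_tbl_nm);      -- assumed parameter list

  EXECUTE IMMEDIATE
    'SELECT t_web_loylty_report_row(col1, col2, col3) FROM ' || v_tbl_nm  -- assumed columns
    BULK COLLECT INTO v_result;

  RETURN v_result;
END;
/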
We have to load 10 million rows into a table from another table based on multiple joins. How much tablespace should we allocate to the table, and from a performance point of view how big should the SGA be?
View 11 Replies View Related
We have a table in the client database that has two columns: parent and child. The whole hierarchy of DB table dependencies is held in this table. If Report 1 depends on Table A, Table A in turn depends on two tables, Table M and Table N, and Table N depends on Table Z, it will appear in the DB table as:
Hierarchy Table

Parent      Child
Report1     Table A
Table A     Table M
Table A     Table N
Table N     Table Z
Requirement:
From the above structure, we need to build a table which will hold the complete hierarchy by breaking it into multiple columns. The output should look like this:
Parent      Child 1     Child 2     Child 3
Report1     Table A     Table M
Report1     Table A     Table N     Table Z
Child 1, Child 2, Child 3 and so on are columns. The number of tables and the number of hierarchical relationships are dynamic.
SQL Statements to create hierarchy table:
create table hierarchy (parent varchar2(20), child varchar2(20));
insert into hierarchy values ('Report1','Table A');
insert into hierarchy values ('Report1','Table B');
insert into hierarchy values ('Table A','Table M');
insert into hierarchy values ('Table B','Table N');
insert into hierarchy values ('Report2','Table P');
insert into hierarchy values ('Table M','Table X');
insert into hierarchy values ('Table N','Table Y');
insert into hierarchy values ('Report X','Table Z');
Approaches already tried:
1) Using indentation: select lpad(' ', 20*(level-1)) || to_char(child) p from hierarchy start with parent='Report1' connect by prior child=parent;
2) Using the connect-by-path function:
select *
from (select parent,child,level,connect_by_isleaf as leaf, sys_connect_by_path(child,'/') as path
from hierarchy start with parent='Report1'
connect by prior child =parent) a where Leaf not in (0);
Both approaches give the information, but the hierarchy data appears in a single column. Ideally we would like the data at each level to appear in a different column.
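Starting from the SYS_CONNECT_BY_PATH output already tried above, one way to spread the levels across columns is to split the path with REGEXP_SUBSTR. This sketch hard-codes three child columns; since the depth is dynamic, a truly variable number of columns would need dynamic SQL built from the maximum depth:

SELECT root_parent                         AS parent,
       REGEXP_SUBSTR(path, '[^/]+', 1, 1)  AS child1,
       REGEXP_SUBSTR(path, '[^/]+', 1, 2)  AS child2,
       REGEXP_SUBSTR(path, '[^/]+', 1, 3)  AS child3
  FROM (SELECT CONNECT_BY_ROOT parent          AS root_parent,
               SYS_CONNECT_BY_PATH(child, '/') AS path,
               CONNECT_BY_ISLEAF               AS leaf
          FROM hierarchy
         START WITH parent = 'Report1'
         CONNECT BY PRIOR child = parent)
 WHERE leaf = 1;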
What command is used to create a table by copying the structure of another table, including constraints?
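For reference, a CREATE TABLE ... AS SELECT with a false predicate copies the columns (and only the NOT NULL constraints); to carry over all constraints, extracting the full DDL with DBMS_METADATA and editing the table name is the usual route. Table and owner names below are placeholders:

-- structure only (columns and NOT NULL constraints):
CREATE TABLE emp_copy AS SELECT * FROM emp WHERE 1 = 0;

-- full DDL, including all constraints, to edit and re-run:
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;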
View 2 Replies View Related
I've 2 tables with the below columns:
Create table rent (customer_id number(10),
doc_num varchar2(20)
);
Create table doc_id (doc_num varchar2(20));
Insert into rent(customer_id) values (1);
Insert into rent(customer_id) values (2);
Insert into rent(customer_id) values (3);
Insert into rent(customer_id) values (4);
[code]...
Now my requirement is to assign doc_num values from the doc_id table to the 4 customers in the rent table randomly. I mean, update doc_num in the rent table from the doc_id table randomly. How do I write the update statement?
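A single correlated UPDATE tends to reuse the same random row for every customer, so a small PL/SQL loop that picks a fresh random doc_num per customer is one straightforward sketch (note it picks with replacement, so the same doc_num can be handed to more than one customer):

DECLARE
  CURSOR c_cust IS
    SELECT customer_id FROM rent FOR UPDATE OF doc_num;
BEGIN
  FOR r IN c_cust LOOP
    UPDATE rent
       SET doc_num = (SELECT doc_num
                        FROM (SELECT doc_num FROM doc_id ORDER BY dbms_random.value)
                       WHERE ROWNUM = 1)
     WHERE CURRENT OF c_cust;
  END LOOP;
  COMMIT;
END;
/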
There is a requirement to make a table's data in one database (e.g. the HR database) available in another database (e.g. the EMP database), instead of accessing it using a database link. In the EMP database (where the data needs to be cloned), the data will only be queried and no write operation will be done. The data in the remote database (the HR database) will occasionally be fully truncated and reinserted. The plan is to do a similar truncate and reinsert of the data (from the HR database) into the EMP database once a month using a DBMS Scheduler job. So basically the data in just one table needs to be cloned in another database.
Question: For this situation, is a regular table or a materialized view the right choice to clone the table in the EMP database, and why? The table in the HR database (the remote database) is not very big.
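For a small, read-only, periodically refreshed copy, a materialized view over a database link is the usual fit, since a complete refresh does the truncate-and-reload and can be driven from the scheduler without custom code. A sketch with hypothetical link, view and table names:

CREATE MATERIALIZED VIEW hr_table_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
  SELECT * FROM hr_table@hr_link;      -- hr_link: db link to the HR database

-- monthly refresh, e.g. called from a DBMS_SCHEDULER job:
BEGIN
  DBMS_MVIEW.REFRESH('HR_TABLE_MV', method => 'C');   -- 'C' = complete refresh
END;
/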
We deleted millions of records from a table.
1. Is it necessary to reorganize the table and indexes after deleting records from the table? I ask because I see some change in table size after table and index reorganization.
2. Will reorganizing the table and indexes improve database performance?
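For context, a large delete leaves the freed space inside the segment, so the segment size only shrinks after a reorganization; whether that helps performance mainly depends on how much full scanning happens below the old high-water mark. Two common reorganization sketches (table and index names are placeholders):

-- Option 1: shrink in place (needs an ASSM tablespace and row movement):
ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE CASCADE;

-- Option 2: rebuild the segment (indexes go UNUSABLE and must be rebuilt):
ALTER TABLE big_table MOVE;
ALTER INDEX big_table_pk REBUILD;

-- refresh optimizer statistics afterwards:
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'BIG_TABLE', cascade => TRUE);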
Oracle 11g
I have a large table of 125 million records, t3_universe. This table never gets updated or altered once loaded, but holds data that we receive from a lead company. I need to select records from this large table that fit certain demographic criteria and insert them into a smaller table, T3_Leads, which will be updated with regard to when the lead is mailed and with other relevant information. How do I select records from this 125 million record table to insert into the smaller table?
I have tried a variety of things - views, materialized views, direct insert into the smaller table... I think I am probably missing other approaches. My current attempt has been to create a view using the query that selects the records, as shown below, and then use a second query that inserts into T3_Leads from this view V_Market. This is very slow. Can I just use an INSERT INTO T3_Leads with this query? It did not seem to work with the WITH clause. My index on the large table is t3_universe_composite and includes zip_code, address_key, household_key.
CREATE VIEW v_market AS
WITH got_pairs AS (
  SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */
         l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state,
         l.household_key, l.hh_type AS l_hh_type, l.address_key, l.narrowband_income,
         l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type AS filler_data,
         l.p1_seq_no, l.p2_seq_no,
         ROW_NUMBER() OVER (PARTITION BY l.address_key
                            ORDER BY l.hh_verification_date DESC) AS r_num
    FROM t3_universe e
    JOIN t3_universe l
      ON l.address_key = e.address_key
     AND l.zip_code    = e.zip_code
     AND l.p1_gender  != e.p1_gender
[code]....
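If the end goal is simply to populate T3_Leads, the WITH clause can be placed directly inside the INSERT (no intermediate view needed), and a direct-path insert usually helps at this volume. A sketch only; the target column list, the r_num = 1 filter and the demographic predicates are assumptions to be filled in from the real requirement:

INSERT /*+ APPEND */ INTO t3_leads (zip_code, address_key, household_key /* , ... */)
WITH got_pairs AS (
  SELECT l.zip_code, l.address_key, l.household_key,   -- remaining columns as in the view
         ROW_NUMBER() OVER (PARTITION BY l.address_key
                            ORDER BY l.hh_verification_date DESC) AS r_num
    FROM t3_universe e
    JOIN t3_universe l
      ON l.address_key = e.address_key
     AND l.zip_code    = e.zip_code
     AND l.p1_gender  != e.p1_gender
)
SELECT zip_code, address_key, household_key /* , ... */
  FROM got_pairs
 WHERE r_num = 1;
COMMIT;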
I want to import a table from my old dump file. The same table is already there in the development box, but a few more columns were added to that table during testing, so those columns are not available in the dump.
TABLE_EXISTS_ACTION=TRUNCATE
The new table
SQL> desc "TESTINVENTORY"."TTRANSACTION"
 Name                 Null?     Type
 -------------------- --------- ------------
 TRANSACTIONID        NOT NULL  CHAR(26)
 BRANCHCODE           NOT NULL  CHAR(3)
 EXTERNALSYSTEM       NOT NULL  CHAR(3)
 EXTRACTSYSTEM        NOT NULL  CHAR(3)
 OWNERBRANCHCODE      NOT NULL  CHAR(3)
 TRADEREFERENCE       NOT NULL  CHAR(20)
[code]...
It gives an error while doing the import.
I have two tables, emp_dtl and iou_tab. I have already made entries (i.e. booking no, emp_cd, emp_name, etc.) in emp_dtl since it is my master table. I want to retrieve, through an LOV in iou_tab, the booking numbers generated in emp_dtl, and the corresponding emp_cd and emp_name info should come into the respective fields in iou_tab.
View 1 Replies View Related
I have a staging table and a target table. How do I pull the last loaded data from the staging table into the target table?
View 4 Replies View Related
I stumbled upon some weird 11gR2 behavior (running on AIX). When I performed a join between a table with user-based content (parts belonging to a sourcing scope) and a base table (parts available), where the parts have to fulfil a particular regular expression, it turned out that the same query is faster with an outer join than with an inner join (about 0.7 sec vs. 20 sec), which makes me believe that REGEXP_LIKE works incorrectly when involved in an inner join.
I tried the same statement with a standard LIKE (but not fulfilling the same condition). This time performance was as expected (the inner join outperforming the outer join).
Oracle version information
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
[code]...
As far as I can see, the execution plan for the inner join doesn't show much higher cost than the one for the outer join (but why does an inner join cost more at all?). The execution plan for both "not like" variants is the same and (surprisingly) similar to the outer-join regexp case.
I hope sample data is not needed, as a lot would be required... This is the second time I have come across the "plan worse but execution time better" phenomenon.
I have a table that has 2 columns of nested table type. Now, in the purge process, when I try to truncate or drop a partition from this table, I get an error saying that I can't do this (because the table has nested tables). How will I be able to truncate/drop a partition from this table? If I change the column types from nested table to varray, will it work?
Also, is there any quick method of moving existing data from a nested table column to a varray column (having the same fields as the nested table)?
I have a table with a BLOB column.
I want to read data from the table and insert it into another table using a cursor.
My code is:
procedure read_data is
cursor get_data is
select id,image from picture1;
id1 number;
pic blob;
begin
open get_data;
[code]....
When I run the form, error FRM-40734 occurs, with the error pointing at the "fetch ..." line.
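Older Forms client-side PL/SQL engines have limited LOB support, so one option worth trying is to move the copy into a stored procedure in the database and just call it from the form. A minimal server-side sketch of the completed loop; picture2 is a hypothetical target table with the same (id, image) columns:

CREATE OR REPLACE PROCEDURE read_data IS
  CURSOR get_data IS
    SELECT id, image FROM picture1;
  id1 NUMBER;
  pic BLOB;
BEGIN
  OPEN get_data;
  LOOP
    FETCH get_data INTO id1, pic;
    EXIT WHEN get_data%NOTFOUND;
    INSERT INTO picture2 (id, image) VALUES (id1, pic);
  END LOOP;
  CLOSE get_data;
  COMMIT;
END;
/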
I have the below data in table test_1.
select * from test_1
ID   Name   Total
-----------------
1    A      100
2    B      100
3    C      100
4    D      100
The test_2 table contains a comma-separated concatenation of IDs. In this table the ID column is of datatype VARCHAR2.
select * from test_2
ID
----
1,2,3
My requirement is to select the data from the test_1 table where the ID values in that table exist in the test_2 table. I tried the below SELECT statement but could not get any data.
SELECT * FROM test_1 WHERE to_char(id) IN (SELECT id FROM test_2)
create table test_1 (id number, name varchar2(100), total number)
create table test_2(id varchar2(100))
insert into test_1 values (1,'A',100)
insert into test_1 values (2,'B',100)
insert into test_1 values (3,'C',100)
insert into test_1 values (4,'D',100)
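The IN comparison fails because test_2 holds the whole list '1,2,3' as a single string, so no individual ID equals it. One sketch that matches each test_1 ID against the delimited list (wrapping both sides in commas so '1' does not match '11'):

SELECT t1.*
  FROM test_1 t1
 WHERE EXISTS (SELECT 1
                 FROM test_2 t2
                WHERE ',' || t2.id || ',' LIKE '%,' || TO_CHAR(t1.id) || ',%');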
My scenario is that I need to insert into a history table whenever a record is updated through a tabular form (insert the updated record along with the additional columns Action_by, Action_type (e.g. update or delete) and Action_date into the history table); i.e. the history table contains all the columns of the main table shown in the tabular form, plus these additional columns: Action_by, Action_type and Action_date.
So now I don't want to create a before/after update trigger on the base table; rather, I would like to create a generic procedure which will insert the updated record into the history table, taking the page alias and page ID as parameters (a generic procedure being one which applies to all the tabular forms (tables) contained in the application).
I have two tables:
Table 1 has 16 crore rows.
Table 2 has 1000 rows.
I joined both tables to fetch all information from the big table for those keys present in the small table. The join query is taking too long to fetch the rows.
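For a 1000-row driving table against a 160-million-row table, a hash join driven by the small table (with a full scan of the big table, or an index on the join key if only a small slice is needed) is the usual shape to aim for. As a sketch with hypothetical table and column names, hints can be used to check whether the optimizer's plan choice is the problem:

SELECT /*+ LEADING(s) USE_HASH(b) FULL(b) */
       b.*
  FROM small_table s
  JOIN big_table   b
    ON b.key_col = s.key_col;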
Suppose a table contains two columns and both are part of the primary key of that table (kind of obvious).
Should I opt for an index-organized table in this case? Or should I opt for an additional running sequential column which would serve as the primary key of this table, and define the actual two columns as a unique key?
Is there a drawback if most of the tables in a database contain composite primary keys?
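For reference, when every column is part of the key there is no separate row data to store, so an index-organized table keeps just one structure instead of a heap plus a primary key index. A minimal sketch with placeholder names:

CREATE TABLE order_item_map (
  order_id NUMBER,
  item_id  NUMBER,
  CONSTRAINT order_item_map_pk PRIMARY KEY (order_id, item_id)
) ORGANIZATION INDEX;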
I need to insert data into Table A from Table B, where most of the fields are identical but Table A may have some additional fields.
ex: Table A: a, b, c, d, e, f
Table B: a, b, c, g, h
How do I insert this using user_tab_columns in a cursor, giving the table names as input? This needs to be configurable and reusable, rather than me mentioning all the fields in my logic.
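One reusable sketch: derive the set of columns the two tables have in common from user_tab_columns and build the INSERT ... SELECT dynamically. This assumes 11gR2 for LISTAGG (on older versions, concatenate the names in a loop instead); the table names are passed in as parameters:

CREATE OR REPLACE PROCEDURE copy_common_cols (p_target IN VARCHAR2, p_source IN VARCHAR2) IS
  v_cols VARCHAR2(4000);
BEGIN
  -- columns present in both tables
  SELECT LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY column_name)
    INTO v_cols
    FROM (SELECT column_name FROM user_tab_columns WHERE table_name = UPPER(p_target)
          INTERSECT
          SELECT column_name FROM user_tab_columns WHERE table_name = UPPER(p_source));

  EXECUTE IMMEDIATE
    'INSERT INTO ' || p_target || ' (' || v_cols || ') ' ||
    'SELECT ' || v_cols || ' FROM ' || p_source;
END;
/
-- usage: EXEC copy_common_cols('TABLE_A', 'TABLE_B');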