Is it possible to create triggers on the various tables and views (i.e. dynamic performance views) that exist in the data dictionary, so that they fire whenever Oracle itself performs any DML on them?
I am removing the sal column from table tab_emp; I want to check whether any materialized view or view uses this column by querying the data dictionary. If I use a LIKE condition against the QUERY column of ALL_MVIEWS it throws an error, since that column is of the LONG data type. Is there a way to search it without creating a function and using it in a query?
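One workaround is a minimal sketch like the following, which avoids a stored function by using an anonymous PL/SQL block; it assumes each materialized view query text fits in 32,767 bytes, since PL/SQL can fetch a LONG into a VARCHAR2 up to that size:

DECLARE
  v_query VARCHAR2(32767);
BEGIN
  FOR r IN (SELECT owner, mview_name, query FROM all_mviews) LOOP
    v_query := r.query;  -- implicit LONG-to-VARCHAR2 conversion inside PL/SQL
    IF INSTR(UPPER(v_query), 'SAL') > 0 THEN
      dbms_output.put_line(r.owner || '.' || r.mview_name || ' references SAL');
    END IF;
  END LOOP;
END;
/

The same loop can be pointed at ALL_VIEWS (TEXT column) to cover ordinary views; note that the INSTR check will also match any other identifier containing SAL, so the hits still need a manual look.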
Question 1) I have read the following statement in a PL/SQL book.
Quote:To check whether an existing procedure is compiled for native execution or not, you can query the following data dictionary views:
[USER | ALL | DBA]_STORED_SETTINGS
[USER | ALL | DBA]_PLSQL_OBJECTS
However, when I query the view USER_PLSQL_OBJECTS I get the following error message:
Quote:ORA-00942: table or view does not exist
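For what it is worth, a hedged guess: the view name that actually exists from 10g onwards is USER_PLSQL_OBJECT_SETTINGS rather than USER_PLSQL_OBJECTS, so the ORA-00942 may simply be a naming issue in the book. A minimal sketch, assuming that view and a procedure called MY_PROC:

SELECT name, type, plsql_code_type
FROM   user_plsql_object_settings
WHERE  name = 'MY_PROC';
-- PLSQL_CODE_TYPE shows NATIVE or INTERPRETED for each stored unit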
Question 2) I have read that PLSQL_WARNINGS can be set DEFERRED at the system level. However, I am unable to defer it. How do I apply the DEFERRED clause to the following statement?
Quote:ALTER SYSTEM SET PLSQL_WARNINGS ='DISABLE:ALL'
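A minimal sketch of the syntax, assuming the parameter accepts deferred system-level changes on your version (check V$PARAMETER.ISSYS_MODIFIABLE first):

-- does this parameter allow DEFERRED at the system level?
SELECT name, issys_modifiable
FROM   v$parameter
WHERE  name = 'plsql_warnings';

-- the DEFERRED keyword goes after the value assignment and affects
-- only sessions that connect after the statement is issued
ALTER SYSTEM SET plsql_warnings = 'DISABLE:ALL' DEFERRED;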
I want to DROP a table called EMPLOYEES, but when I execute DROP TABLE EMPLOYEES I get an error saying that I cannot do it because the table is referenced by other table(s).
I tried to use the DBA_CONS_COLUMNS and DBA_CONSTRAINTS data dictionary views, but that was not enough to find the referencing tables.
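A minimal sketch of a query that joins DBA_CONSTRAINTS to itself to list the foreign keys pointing at EMPLOYEES (it assumes access to the DBA views and an owner of HR; adjust the owner to yours):

SELECT c.owner, c.table_name, c.constraint_name
FROM   dba_constraints c
JOIN   dba_constraints p
       ON  c.r_owner = p.owner
       AND c.r_constraint_name = p.constraint_name
WHERE  c.constraint_type = 'R'      -- referential (foreign key) constraints
AND    p.owner = 'HR'
AND    p.table_name = 'EMPLOYEES';

Once the referencing constraints are known they can be dropped individually, or DROP TABLE employees CASCADE CONSTRAINTS will drop them along with the table.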
In order to improve the performance of our live server, I am trying to do an exhaustive comparison with our test environment, which is quite quick despite the fact that we port the data from Live every month.
There are no obviously slow queries appearing in the top SQLs of AWR; we have already optimised such things. Right now it is about a general uplift rather than SQL-based tuning.
I picked up random SQLs and noticed marked differences in execution time, typically 3 to 4 times, and in some cases much more than that.
1. While the explain plans of the queries are the same, traces of the queries give a different picture: recursive calls, consistent gets and sorts (memory) are quite high on Live.
2. I have no solid evidence, but my instinct is that the recursive calls are the major contributing factor; they are sometimes 2000+ for a single SQL.
3. Googling that finally led me to compare the data dictionary sections of the AWR reports from test and Live.
The dc_objects row caught my eye. In a 4-hour AWR window there were about 10 million get requests with a pct miss of ~10. For a similar load over 4 hours, the test server had 5 million gets with a 0.08 pct miss.
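To watch this outside AWR, a minimal sketch against the dictionary cache statistics (V$ROWCACHE); the figures correspond to the GETS / PCT MISS columns in the AWR dictionary cache section:

SELECT parameter,
       gets,
       getmisses,
       ROUND(100 * getmisses / NULLIF(gets, 0), 2) AS pct_miss
FROM   v$rowcache
WHERE  parameter = 'dc_objects';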
I am trying to perform a DML operation (INSERT) to load data into a table. I created a procedure in which an insert statement is generated. In another procedure I am trying to execute this generated statement with EXECUTE IMMEDIATE. For that I created two variables: the first contains the insert statement and the other contains the list of columns. I am passing these strings to EXECUTE IMMEDIATE like this:
Execute Immediate(v_query) using(v_col_list)
After execution I am getting the following error:
ORA-01008: not all variables bound
ORA-06512: at 'db_name.load_data', line 217
but when I try it with the list of columns in the USING clause it works, like:
Execute Immediate(v_query) using col1,col2,col3
I have more than 150 tables in the schema and I am creating a single procedure to load the data into the base tables using external tables. By passing the table name to the procedure it generates the update and insert statements, but I get the error above while executing the statements with dynamic SQL.
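ORA-01008 here is usually a placeholder/bind-count mismatch: USING supplies one bind value per placeholder, so a single string holding a comma-separated column list counts as just one value (and identifiers cannot be bound at all). A minimal sketch with hypothetical table and column names:

DECLARE
  v_query VARCHAR2(4000);
BEGIN
  -- three placeholders, so USING must supply three separate bind values
  v_query := 'INSERT INTO tab_emp (empno, ename, sal) VALUES (:1, :2, :3)';
  EXECUTE IMMEDIATE v_query USING 100, 'SMITH', 5000;

  -- a dynamic column list has to be concatenated into the statement text,
  -- not bound; only the values belong in the USING clause
END;
/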
I have created a materialized view which is based on a view on a remote database. Now how do I refresh it?
The materialized view is created by:
CREATE MATERIALIZED VIEW mv_employee_name AS SELECT EMPLID, EMPL_NAME FROM VEMPDATA@REMOTEDB WHERE REGION = 'US';
I am wondering how the refresh happens, and how I specify the refresh clause. The REFRESH FAST option looks for a materialized view log on the master table, but in this case the master is a remote view, so I cannot create any objects on the remote DB.
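A minimal sketch of the alternative, assuming a complete refresh is acceptable (fast refresh is ruled out because a materialized view log cannot be created on the remote view):

CREATE MATERIALIZED VIEW mv_employee_name
  REFRESH COMPLETE ON DEMAND
AS
  SELECT emplid, empl_name
  FROM   vempdata@remotedb
  WHERE  region = 'US';

-- refresh it whenever needed (method 'C' = complete)
BEGIN
  DBMS_MVIEW.REFRESH('MV_EMPLOYEE_NAME', method => 'C');
END;
/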
SQL> select SEGMENT_NAME, SEGMENT_TYPE, BLOCKS, EXTENTS, BYTES/1024
  2  from user_segments
  3  where SEGMENT_TYPE='TABLE'
  4  ORDER BY SEGMENT_NAME;
[code]....
The COUNTRIES table is missing from the USER_SEGMENTS data dictionary view, but I can query COUNTRIES with a SELECT statement.
SQL> SELECT * FROM COUNTRIES;
CO COUNTRY_NAME                             REGION_ID
-- ---------------------------------------- ----------
AR Argentina                                         2
AU Australia                                         3
BE Belgium                                           1
BR Brazil                                            2
[code]...
Why is the COUNTRIES table missing from the USER_SEGMENTS data dictionary view?
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
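One possible explanation worth checking (a sketch, assuming the HR sample schema, where COUNTRIES is an index-organized table): an IOT stores its data in an index segment named SYS_IOT_TOP_<object_id>, so it never appears with SEGMENT_TYPE='TABLE':

SELECT table_name, iot_type
FROM   user_tables
WHERE  table_name = 'COUNTRIES';          -- IOT_TYPE = 'IOT' for an index-organized table

SELECT segment_name, segment_type
FROM   user_segments
WHERE  segment_name LIKE 'SYS_IOT_TOP%';  -- the IOT's storage shows up here as an INDEX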
See the attached file for the creation script and data load. Each staff member is required to complete at least one task every three years. The source table contains an EID (aka user ID) and a date column for each task holding the date when that task was completed. If a task has never been started/completed, the date value is NULL.
The result set should show the EID, the date of the latest task completed, and whether a task was completed within the last 3 years from a given date (for example June 30, 2012).
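Without the attachment the real table layout is unknown, so this is only a sketch with hypothetical names (staff_tasks with task1_date .. task3_date); the idea is to take the greatest non-NULL task date per EID and compare it with the given date minus 36 months:

SELECT eid,
       GREATEST(NVL(task1_date, DATE '0001-01-01'),     -- sentinel stands in for NULL (never completed)
                NVL(task2_date, DATE '0001-01-01'),
                NVL(task3_date, DATE '0001-01-01'))              AS latest_task,
       CASE
         WHEN GREATEST(NVL(task1_date, DATE '0001-01-01'),
                       NVL(task2_date, DATE '0001-01-01'),
                       NVL(task3_date, DATE '0001-01-01'))
              >= ADD_MONTHS(DATE '2012-06-30', -36)
         THEN 'YES'
         ELSE 'NO'
       END                                                       AS within_3_years
FROM   staff_tasks;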
I've got source data in a complex relational form. For reporting purposes, a simpler, less normalized data model is needed. The requirements:
- There are two target views of the source data: one with full access to all data, the second with access only to a subset of the data (same columns, but not all the records).
- For both target groups a separate schema shall be available, each containing only the relevant data.
- Today these schemas are physically located on the same DB instance and host as the source data.
- A daily refresh is sufficient.
- A later relocation of the reporting schemas to other DB instances shall be possible without major changes.
- Oracle 10g should be used.
I tried to accomplish this using materialized views (materialized seems better since there will at some point be a need to have all the appropriate data somewhere else geographically, AND it provides a complete refresh from the source), but there is a problem: when creating the MV it is possible to type 'SELECT *', but after execution it is expanded into the real column names. This is important because a column added later to the source data will NOT appear in the MV after a refresh.
I also thought about Data Guard, Streams and RAC, but I think only with materialized views can you choose which data (rows, columns) to expose.
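A minimal sketch of the two reporting materialized views under those requirements (hypothetical schema, table and predicate names); the point about SELECT * still applies: the column list is frozen at creation time, so a column added to the source later means recreating the MV:

CREATE MATERIALIZED VIEW rep_full.mv_orders
  REFRESH COMPLETE ON DEMAND
AS SELECT * FROM src.orders;

CREATE MATERIALIZED VIEW rep_subset.mv_orders
  REFRESH COMPLETE ON DEMAND
AS SELECT * FROM src.orders
   WHERE region = 'EU';   -- same columns, restricted set of records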
I have a set of rows based on a complex view over multiple tables.
I will be updating some of its columns from the front end. Is there any way to lock those rows while updating so that no other user can update them?
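A minimal sketch with hypothetical view and column names, assuming the view is updatable (key-preserved); otherwise the SELECT ... FOR UPDATE has to be issued against the base tables instead:

-- lock the rows you are about to change; other sessions attempting to
-- update the same rows will wait, or fail immediately with NOWAIT
SELECT emp_id, sal
FROM   v_emp_details
WHERE  dept_id = 10
FOR UPDATE NOWAIT;

UPDATE v_emp_details
SET    sal = sal * 1.1
WHERE  dept_id = 10;

COMMIT;   -- the locks are released at commit or rollback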
I have 3 reporting tables with 2.2 million records each, rebuilt nightly. The data is used online 24/7, so snapshot tables are built from the refreshed reporting tables. The current method is:
delete from snapshot table;
insert into snapshot table (select * from report table);
<repeat for other 2 tables>
commit;
This seems resource-intensive on the system, even though the table is defined with the NOLOGGING option.
Is it better to create an MV (select only, with REFRESH COMPLETE ON DEMAND)? The query is very simple, without joins, so at first it seems like overkill. However, I also see that DBMS_MVIEW.REFRESH allows an atomic option; thus, if one of the 3 MVs fails during refresh, all 3 roll back, which is a nice feature.
Are there better ways to replicate a snapshot table that I've missed? Is the delete-and-insert strategy a bad idea?
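A minimal sketch of the MV route (hypothetical MV names): with atomic_refresh => TRUE a failure rolls all three back as described; with FALSE each MV is truncated and reloaded, which generates far less undo/redo but briefly leaves the table empty:

BEGIN
  DBMS_MVIEW.REFRESH(
    list           => 'MV_REPORT1,MV_REPORT2,MV_REPORT3',
    method         => 'CCC',        -- complete refresh for each MV
    atomic_refresh => TRUE);        -- all-or-nothing across the list
END;
/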
As we know, the DATE datatype stores both a date part and a time part. If I specify the date format for my database as 'DD-MM-YYYY HH24:MI:SS', can I somehow ensure that particular columns containing date values use the format 'DD-MM-YYYY', i.e. without the time part? Can we specify separate date formats for specific columns during table creation?
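For what it is worth, a DATE column always stores a time component and the NLS format only controls display and conversion, so the usual approach is to strip or constrain the time part. A minimal sketch with a hypothetical table:

CREATE TABLE emp_hist (
  emp_id    NUMBER,
  hire_date DATE CHECK (hire_date = TRUNC(hire_date))  -- rejects any non-midnight time part
);

INSERT INTO emp_hist VALUES (1, TRUNC(SYSDATE));        -- time is stored as 00:00:00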
I have to convert some existing materialized views (fast refresh) to partitioned materialized views.
The database version is Oracle 10.1.0.4. I have decided to use the ON PREBUILT TABLE option to do the partitioning, as it minimizes the transfer time from the master site.
1) Stop replication.
2) Create interim tables with a structure similar to the materialized views.
3) Transfer all data from the materialized views to the interim tables.
4) Script out the materialized view definitions and add the ON PREBUILT TABLE option to the scripts.
5) Drop the materialized views.
6) Rename the interim tables to the same names as the materialized views.
7) Run the scripts to create the materialized views with the ON PREBUILT TABLE option.
8) Refresh the newly created materialized views; it should take a short time since I am using the ON PREBUILT TABLE option.
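A minimal sketch of step 7 with hypothetical names; the partitioned interim table already holds the data, and the materialized view definition is layered on top of it:

-- interim table mv_orders is already partitioned, populated and renamed
CREATE MATERIALIZED VIEW mv_orders
  ON PREBUILT TABLE
  REFRESH FAST
AS SELECT * FROM orders@master_site;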
But I am facing one major issue: if I drop the materialized views, the materialized view logs on the master tables are purged. When the materialized views are then fast refreshed, some data is missing, namely the data that was purged from the logs when the materialized views were dropped.
Do you know of other ways to convert existing materialized views to partitioned materialized views? Is there any workaround to prevent the materialized view logs from being purged?
I want to get month-to-date data: if I pass today's date (or any date) as a parameter, I should get data for that month (the month of the passed date) up to the passed date. I also need year-to-date: for the same parameter, I should get data for that financial year (the year of the passed date) up to the passed date. How do I get month-to-date and year-to-date data?
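A minimal sketch with hypothetical table and column names (sales, sale_date, amount), and assuming the financial year starts on 1 April; adjust the fiscal start month to your calendar:

-- month to date
SELECT SUM(amount) AS mtd_amount
FROM   sales
WHERE  sale_date >= TRUNC(:p_date, 'MM')    -- first day of the parameter's month
AND    sale_date <= :p_date;

-- financial year to date (fiscal year assumed to start on 1 April)
SELECT SUM(amount) AS ytd_amount
FROM   sales
WHERE  sale_date >= CASE
                      WHEN :p_date >= ADD_MONTHS(TRUNC(:p_date, 'YYYY'), 3)
                      THEN ADD_MONTHS(TRUNC(:p_date, 'YYYY'), 3)    -- 1 April this calendar year
                      ELSE ADD_MONTHS(TRUNC(:p_date, 'YYYY'), -9)   -- 1 April last calendar year
                    END
AND    sale_date <= :p_date;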