I want to get the stale statistics for tables residing in the APPS schema. Is there any table or view, like a DBA_STALE_STATS, that gives these details? Currently I am checking the LAST_ANALYZED column in DBA_TABLES.
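Something along these lines is what I am after; if I read the docs right, DBA_TAB_STATISTICS already exposes a STALE_STATS column (the monitoring info may need flushing first):

-- Flush in-memory DML monitoring info so STALE_STATS reflects recent changes
-- (my understanding is the view can lag without this).
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT owner, table_name, stale_stats, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'APPS'
AND    stale_stats = 'YES';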
Why do we gather table statistics manually? Is it because of database performance?
I know that in Oracle Database 10g, Automatic Optimizer Statistics Collection reduces the likelihood of poorly performing SQL statements due to stale or invalid statistics, and enhances SQL execution performance by providing optimal input to the query optimizer.
The automatic job gathers statistics on a table once 10% of its rows have changed.
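In 11g, I believe this threshold is exposed as the STALE_PERCENT preference (in 10g it is fixed at 10%); a sketch, with a hypothetical table name:

-- Read the current threshold for one table.
SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', 'APPS', 'SOME_TABLE') FROM dual;

-- Lower it to 5% for that table.
EXEC DBMS_STATS.SET_TABLE_PREFS('APPS', 'SOME_TABLE', 'STALE_PERCENT', '5');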
Are there any problems with updating statistics? My manager told me they had a big performance issue when they updated the statistics for the whole database. Is this a rare case?
I know the advantages of updating statistics, but are there any disadvantages?
I use the following query to find out the remaining time for the table-statistics gathering that is currently running.
SELECT sid, serial#, opname, sofar, totalwork, username, context,
       ROUND(sofar/totalwork*100, 2) AS complete
FROM   v$session_longops
WHERE  totalwork != 0
AND    sofar != totalwork
ORDER  BY 1;
The SOFAR column shows 9325 and the TOTALWORK column shows 12287. How do I convert these columns into hours and minutes?
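If I understand the view correctly, V$SESSION_LONGOPS also exposes ELAPSED_SECONDS and TIME_REMAINING, so the conversion can be done directly (a sketch):

SELECT sid, serial#, opname,
       ROUND(sofar / totalwork * 100, 2)     AS pct_complete,
       FLOOR(time_remaining / 3600)          AS hours_left,
       FLOOR(MOD(time_remaining, 3600) / 60) AS minutes_left,
       elapsed_seconds
FROM   v$session_longops
WHERE  totalwork != 0
AND    sofar != totalwork
ORDER  BY 1;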
Also, sometimes the query shows no output at all for the currently running operation.
I am connected to a remote database as a user named 'TEST', and this user has DBA privileges. I am not able to gather statistics for the TEST schema, nor for any table in that schema.
SQL> EXEC dbms_stats.gather_schema_stats('TEST', cascade=>TRUE);
BEGIN dbms_stats.gather_schema_stats('TEST', cascade=>TRUE); END;

*
ERROR at line 1:
ORA-44004: invalid qualified SQL name
ORA-06512: at "SYS.DBMS_STATS", line 13210
ORA-06512: at "SYS.DBMS_STATS", line 13556
ORA-06512: at "SYS.DBMS_STATS", line 13634
ORA-06512: at "SYS.DBMS_STATS", line 13593
ORA-06512: at line 1
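If it matters, ORA-44004 seems to come from the name-validation (DBMS_ASSERT) layer inside DBMS_STATS, so my next step is to rule out stray or invisible characters in the schema name; a minimal retry with named parameters:

-- Name typed fresh, named parameters, nothing copied from elsewhere.
EXEC dbms_stats.gather_schema_stats(ownname => 'TEST', cascade => TRUE);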
I am looking at a performance issue at the moment and trying to replicate it on a test system. I am initially looking at the impact of up-to-date statistics on the main schema's objects.
For this I wanted to:
1. First run the batch with whatever stats were present in the database.
2. Flashback the DB to before the batch.
3. Gather stats.
4. Re-run the batch with updated stats and compare results.
However, I inadvertently ran the stats job before running the load the first time! I have the SCN from when the environment was set up like production (i.e. before the stats were run), so am I correct in saying that if I flash back to this point the stats will be "old" and I can just run the batch then? I know I can verify this after the Flashback by looking at LAST_ANALYZED on the tables, but it would be good to know beforehand, as it's a 12-hour batch.
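For reference, the plan once I take the hit (the SCN and schema name below are placeholders):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO SCN 1234567;   -- the SCN captured at environment setup (placeholder value)
ALTER DATABASE OPEN RESETLOGS;

-- Verify the stats really came back "old".
SELECT table_name, last_analyzed
FROM   dba_tables
WHERE  owner = 'MAIN_SCHEMA'         -- placeholder schema name
ORDER  BY last_analyzed DESC;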
The application was not starting due to some pending transactions in the database, which the DBA team rolled back. To avoid such situations, my idea was a job that calls a procedure to monitor the table status every day and send a mail. That job is working fine as long as there are no pending transactions in DBA_PENDING_TRANSACTIONS.
But now I am in doubt: if someday there are pending transactions in DBA_PENDING_TRANSACTIONS, will SELECT * FROM DBA_PENDING_TRANSACTIONS still work as normal, and will this whole process of monitoring the table and sending mail keep working?
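The check itself is nothing fancy; a trimmed sketch of the job's procedure (mail addresses are placeholders, and I am assuming UTL_MAIL is installed and smtp_out_server is set):

CREATE OR REPLACE PROCEDURE check_pending_txns IS
    l_count NUMBER;
BEGIN
    SELECT COUNT(*) INTO l_count FROM dba_pending_transactions;
    IF l_count > 0 THEN
        UTL_MAIL.SEND(
            sender     => 'db@example.com',         -- placeholder
            recipients => 'dba-team@example.com',   -- placeholder
            subject    => 'Pending transactions found: ' || l_count,
            message    => 'Check DBA_PENDING_TRANSACTIONS.');
    END IF;
END check_pending_txns;
/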
I am trying to create a query to find our pending purchase requisitions (PRs) that have no PO. We have 4 tables, 2 for PRs and 2 for POs; you cannot create a PO without having a PR. The problem is that when I try to find the pending PRs for all projects, the query takes so much time that the computer hangs. Is there any way to generate it grouped by project number and PR number, including the material on each PR? Below is a description of the related tables; a sketch of the query I am after follows the description.
PRs table:
MASTER:
--------------------
PRNUMBER
TYPE
MADEBY
PROJECTNUM
[code]....
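Roughly what I am trying, with an anti-join; the PR detail and PO table names below are placeholders, since only MASTER is listed above:

SELECT m.projectnum,
       m.prnumber,
       COUNT(d.material) AS materials_on_pr
FROM   master m
       JOIN pr_detail d ON d.prnumber = m.prnumber     -- placeholder PR detail table
WHERE  NOT EXISTS (SELECT 1
                   FROM   po_master p                  -- placeholder PO header table
                   WHERE  p.prnumber = m.prnumber)
GROUP  BY m.projectnum, m.prnumber
ORDER  BY m.projectnum, m.prnumber;

If this is the right shape, would an index on the PO table's PRNUMBER column be the thing that stops the hang?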
Find the date difference: I need to find how many days a task has been pending when the ACT_NAME field switches from 'SET PENDING%' to 'RESUME PENDING%', using the ACTIONTAKENDATETEXT field in the History table.
Expected result, for example: NoPendingDays = 23 (8 + 15).
I have attached a SQL file with the CREATE TABLE statement and sample INSERT values.
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
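Since the sample data is in the attachment rather than inline, here is the shape of what I tried (TASKID is a placeholder key column; I am assuming ACTIONTAKENDATETEXT is a DATE, or wrapped in TO_DATE, and that SET/RESUME rows strictly alternate per task):

SELECT taskid,
       SUM(resume_date - pend_date) AS nopendingdays
FROM  (SELECT taskid,
              act_name,
              actiontakendatetext AS pend_date,
              -- date of the next row for this task, kept only if it is a RESUME
              LEAD(CASE WHEN act_name LIKE 'RESUME PENDING%'
                        THEN actiontakendatetext END)
                  OVER (PARTITION BY taskid
                        ORDER BY actiontakendatetext) AS resume_date
       FROM   history)
WHERE  act_name LIKE 'SET PENDING%'
AND    resume_date IS NOT NULL
GROUP  BY taskid;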
Is anyone using Oracle 11g's compression feature in production? I haven't read anything negative yet, but that doesn't mean there isn't anything that could have an adverse effect. I wanted to check whether there are any effects on performance, or any disadvantages of using this compression feature. I have tested it on one of my major tablespaces and saw a big reduction in tablespace size, but I am still hesitant to put it into production.
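For reference, what I ran in my test (11gR2 syntax; I believe 11gR1 spelled it COMPRESS FOR ALL OPERATIONS, and the table/index names are placeholders):

-- Enable OLTP compression on an existing table and rebuild its blocks.
ALTER TABLE big_table MOVE COMPRESS FOR OLTP;

-- Indexes are left UNUSABLE by the MOVE and need a rebuild.
ALTER INDEX big_table_pk REBUILD;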
The requirement is to format a given multi-line text into a specified number of lines, each of a specified number of characters. Words should not be broken; instead, a word that does not fit must move to a new line, as long as lines are still available.
The function could be of this sort:

CREATE OR REPLACE FUNCTION wrap_text(
    p_text VARCHAR2,
    p_len  NUMBER,   -- no of lines
    p_chr  NUMBER    -- no of chr/line
) RETURN VARCHAR2;
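A minimal sketch of one possible body (my own attempt, lightly tested): greedy word wrap, words kept whole, output cut off once p_len lines are filled; note a word longer than p_chr still gets its own over-long line:

CREATE OR REPLACE FUNCTION wrap_text(
    p_text VARCHAR2,
    p_len  NUMBER,   -- no of lines
    p_chr  NUMBER    -- no of chr/line
) RETURN VARCHAR2
IS
    l_word   VARCHAR2(4000);
    l_line   VARCHAR2(4000);
    l_result VARCHAR2(4000);
    l_lines  PLS_INTEGER := 1;
    i        PLS_INTEGER := 1;
BEGIN
    LOOP
        -- i-th whitespace-delimited word; NULL once the text is exhausted.
        l_word := REGEXP_SUBSTR(p_text, '\S+', 1, i);
        EXIT WHEN l_word IS NULL;

        IF l_line IS NULL THEN
            l_line := l_word;                   -- first word on this line
        ELSIF LENGTH(l_line) + LENGTH(l_word) + 1 <= p_chr THEN
            l_line := l_line || ' ' || l_word;  -- word still fits
        ELSE
            EXIT WHEN l_lines >= p_len;         -- out of lines: stop here
            l_result := l_result || l_line || CHR(10);
            l_line   := l_word;
            l_lines  := l_lines + 1;
        END IF;
        i := i + 1;
    END LOOP;
    RETURN l_result || l_line;
END wrap_text;
/

-- Quick check:
SELECT wrap_text('the quick brown fox jumps over the lazy dog', 3, 12) FROM dual;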
I used the Exchange Partition feature to swap segments between two tables: one partitioned and one non-partitioned. The exchange went well. However, all the data in the partitioned table has gone into the partition that holds the MAXVALUE bound.
/** actual table names changed due to client confidentiality issues */
-- Drop the 2 intermediate tables if they already exist
drop table ordered_inv_bkp cascade constraints;
drop table ordered_inv_t cascade constraints;
/**
First create a non-partitioned table from ORDERED_INV and then add the primary key and unique index(es):
*/
create table ordered_inv_bkp as select * from ordered_inv;

alter table ordered_inv_bkp add constraint ordinvb_pk primary key (ordinv_id);

-- create unique index ordinv_scinv_uix on ordered_inv_bkp( SCP_ID ASC,
[code]....
-- Next, we have to create a partitioned table ORDERED_INV_T with a similar
-- structure as ORDERED_INV.
-- This is a bit tricky, and involves some PL/SQL code
-- Add section to set default values for the intermediate table ORDERED_INV_T
FOR crec_cols IN (
    SELECT u.column_name, u.nullable, u.data_default, u.table_name
    FROM   user_tab_columns u
    WHERE  u.table_name = 'ORDERED_INV'
    AND    u.data_default IS NOT NULL
) LOOP
[code]....
-- Next, use exchange partition for the actual swap
-- between ordered_inv_t and ordered_inv_bkp
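-- A sketch of the exchange statement itself (the partition name below is
-- hypothetical). Worth noting: EXCHANGE is a segment swap, so every row of
-- the non-partitioned table lands in the single partition named here,
-- which is why everything ended up in the MAXVALUE partition in my case.
ALTER TABLE ordered_inv_t
    EXCHANGE PARTITION p_maxvalue WITH TABLE ordered_inv_bkp
    INCLUDING INDEXES WITHOUT VALIDATION;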
-- Analyze both tables : ordered_inv_t and ordered_inv_bkp
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME => 'ORDERED_INV_T');
    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME => 'ORDERED_INV_BKP');
END;
/

SET TIMING ON;
I need to exclude a single schema from the autostats gathering feature in 11g. The tables in this schema are analyzed at the appropriate time via the application code. The autostats job sometimes kicks in while the tables are being updated or loaded, which can skew explain plans during the updates/inserts.
I've searched through the Oracle documentation and cannot find a way to simply exclude the schema without locking its statistics. I see it is possible to disable autostats at the entire-database level, but not at the schema level.
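For completeness, the only documented route I have found is the lock-based one I was hoping to avoid (schema and table names are placeholders):

-- Lock the schema's stats so the nightly auto task skips its tables...
EXEC DBMS_STATS.LOCK_SCHEMA_STATS('APP_SCHEMA');

-- ...and have the application job override the lock when it gathers.
EXEC DBMS_STATS.GATHER_TABLE_STATS('APP_SCHEMA', 'SOME_TABLE', force => TRUE);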
I am running Oracle 9i Release 2 on my DB server. I get the following error every time I try to create a bitmap index:
ORA-00439: feature not enabled: Bit-mapped indexes
I have queried the v$option view; the value of the parameter 'Bit-mapped indexes' is FALSE.
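The exact check I ran:

SELECT parameter, value
FROM   v$option
WHERE  parameter = 'Bit-mapped indexes';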
The result of v$version is:
Oracle9i Release 9.2.0.1.0 - 64bit Production
PL/SQL Release 9.2.0.1.0 - Production
CORE 9.2.0.1.0 Production
TNS for Solaris: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production
Actually, when we created the database, our installation was halted, so we created the database manually using the CREATE DATABASE command.
We have a 10g physical standby set up in our environment and we are migrating to 11g now. We want to use the Active Data Guard feature of 11g to run live reports on the standby rather than on production. The questions I have are:
1) In our current 10g standby environment we use db_name=cusms, exactly matching the production database name. I don't see us setting DB_UNIQUE_NAME on the standby. But I have read several blogs where everyone talks about setting DB_UNIQUE_NAME on the standby, with db_name still exactly matching production, on 11g. Is DB_UNIQUE_NAME a new requirement on 11g? Can I go ahead without DB_UNIQUE_NAME and just have db_name exactly matching production? What are the implications of doing so? The reason we want to stick with this is that, in case of failover, we want the database name to stay the same. But I want to hear your thoughts on this.
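A sketch of the kind of parameter file I mean for question 1 (the standby value is hypothetical). As I understand it, DB_UNIQUE_NAME simply defaults to DB_NAME when unset, and it never changes the database name on failover, but I would like that confirmed:

db_name=cusms                  # must stay identical to the primary's db_name
db_unique_name=cusms_stby      # hypothetical; the one value that would differ per site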
2) While building the standby, I noticed a few things and want your clarification:
a. On the standby database, should I mount the instance using a pfile or an spfile, or does it not matter?
b. Let's say I use either an spfile or a pfile: can I just have DB_UNIQUE_NAME in that file, start the instance in NOMOUNT, and do the duplicate from RMAN?
c. As soon as my DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE finishes, I usually exit the RMAN session, go to SQL*Plus and shut down the standby instance. (Is this OK to do?)
d. Then I start the standby instance with STARTUP (mount and open the database); this should open the standby database in read-only mode. Next I issue ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT to put the database in recovery mode. (Any steps missing here?)
e. Then go to the primary, do a few log switches, and come back to the standby to see whether the primary's changes made it across.
But what I have observed is:
a. The duplicate runs successfully, but during the course of the duplicate the primary generates a few archives that are neither shipped to nor applied on the standby. When I go to the standby to recover the database, it says media recovery is needed and asks for the archive files, and I have to move these files from primary to standby manually. Isn't this taken care of automatically? (See the parameter sketch after this list.)
b. I also noticed that I cannot open the standby database in read-only mode right after the DUPLICATE command; trying to open it says media recovery is needed. What is the best procedure to open the database in read-only mode immediately?
c. In my standby init.ora, if I use DB_UNIQUE_NAME, where would my control file be placed? Will Oracle create the control file from the primary, put it on my standby database, and update an entry in my pfile or spfile?
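The parameter sketch mentioned in point a (the service names and unique names are placeholders):

# Primary side -- what I believe is needed for automatic shipping:
log_archive_config='DG_CONFIG=(cusms,cusms_stby)'
log_archive_dest_2='SERVICE=cusms_stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=cusms_stby'
log_archive_dest_state_2=ENABLE

# Standby side -- for fetching any gaps:
fal_server=cusms_prim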
I have a table called "Subjects" which lists subjects to match with notations in another table. I created a simple sequence (CREATE SEQUENCE subjectid) to generate the subject IDs for the table. But I notice gaps in the values: the sequence increments automatically even when I am not using it. It even appears to increment when I am not doing any database activity.
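For context, how the sequence was created, and my current suspicion (the default CACHE 20 being the culprit is a guess on my part):

-- Created with all defaults, i.e. CACHE 20 NOORDER.
CREATE SEQUENCE subjectid;

-- Cached-but-unused values are lost when the sequence ages out of the
-- shared pool or the instance restarts, which would look like the
-- sequence "incrementing on its own" at the next NEXTVAL.
SELECT subjectid.NEXTVAL FROM dual;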
This is not an issue of data integrity, because the values in the subject_id column do not need to be sequential, only unique. But it really has me curious. I created another table called "keep_track" to keep track of what is happening: